Test Report: KVM_Linux_crio 19355

6d23947514fd7a389789fed180382829b6444229:2024-07-31:35588

Tests failed (30/326)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 153.18
45 TestAddons/parallel/MetricsServer 358.48
54 TestAddons/StoppedEnableDisable 154.44
151 TestFunctional/parallel/ImageCommands/ImageBuild 9.49
173 TestMultiControlPlane/serial/StopSecondaryNode 141.95
175 TestMultiControlPlane/serial/RestartSecondaryNode 58.23
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 397.71
180 TestMultiControlPlane/serial/StopCluster 141.85
240 TestMultiNode/serial/RestartKeepsNodes 332.04
242 TestMultiNode/serial/StopMultiNode 141.28
249 TestPreload 256.7
257 TestKubernetesUpgrade 432.26
329 TestStartStop/group/old-k8s-version/serial/FirstStart 320.83
354 TestStartStop/group/embed-certs/serial/Stop 139.04
357 TestStartStop/group/no-preload/serial/Stop 138.91
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.98
361 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 107.82
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
371 TestStartStop/group/old-k8s-version/serial/SecondStart 740.4
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.31
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.41
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.32
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.49
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 435.14
377 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 326.97
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 384.33
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 88.9
TestAddons/parallel/Ingress (153.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-715925 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-715925 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-715925 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8401d1a8-6dd2-40c9-8e23-deb823f5b208] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8401d1a8-6dd2-40c9-8e23-deb823f5b208] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004055125s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715925 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.357955034s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-715925 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.147
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 addons disable ingress-dns --alsologtostderr -v=1: (1.224606666s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 addons disable ingress --alsologtostderr -v=1: (7.682948862s)
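The steps above can be replayed by hand to narrow down where the ingress check breaks; the sketch below simply reuses the profile name (addons-715925) and manifests recorded in this run, and curl exit code 28 (surfaced here as "ssh: Process exited with status 28") normally indicates a timeout rather than a refused connection.

# Sketch only: re-running the same checks this test performed, outside the test harness.
kubectl --context addons-715925 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
kubectl --context addons-715925 replace --force -f testdata/nginx-ingress-v1.yaml
kubectl --context addons-715925 replace --force -f testdata/nginx-pod-svc.yaml
# The step that failed in this run: curl from inside the VM with the ingress Host header.
out/minikube-linux-amd64 -p addons-715925 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
# Checking the controller logs would show whether the request ever reached nginx.
kubectl --context addons-715925 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50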
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-715925 -n addons-715925
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 logs -n 25: (1.209979198s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-430731                                                                     | download-only-430731 | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:29 UTC |
	| delete  | -p download-only-373672                                                                     | download-only-373672 | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-281803 | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC |                     |
	|         | binary-mirror-281803                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37353                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-281803                                                                     | binary-mirror-281803 | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC |                     |
	|         | addons-715925                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC |                     |
	|         | addons-715925                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-715925 --wait=true                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:32 UTC | 31 Jul 24 19:32 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:32 UTC | 31 Jul 24 19:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | -p addons-715925                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-715925 ssh cat                                                                       | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | /opt/local-path-provisioner/pvc-7abc566a-0469-49d9-9aef-8963a9d00867_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-715925 ip                                                                            | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | addons-715925                                                                               |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | -p addons-715925                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-715925 addons                                                                        | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-715925 addons                                                                        | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | addons-715925                                                                               |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-715925 ssh curl -s                                                                   | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-715925 ip                                                                            | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:29:04
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:29:04.417251  130103 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:29:04.417370  130103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:29:04.417382  130103 out.go:304] Setting ErrFile to fd 2...
	I0731 19:29:04.417389  130103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:29:04.417595  130103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:29:04.418243  130103 out.go:298] Setting JSON to false
	I0731 19:29:04.419592  130103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4280,"bootTime":1722449864,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:29:04.419661  130103 start.go:139] virtualization: kvm guest
	I0731 19:29:04.421751  130103 out.go:177] * [addons-715925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:29:04.423095  130103 notify.go:220] Checking for updates...
	I0731 19:29:04.423108  130103 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:29:04.424472  130103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:29:04.425899  130103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:29:04.427241  130103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:29:04.428556  130103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:29:04.429886  130103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:29:04.431272  130103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:29:04.463327  130103 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 19:29:04.464717  130103 start.go:297] selected driver: kvm2
	I0731 19:29:04.464732  130103 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:29:04.464744  130103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:29:04.465508  130103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:29:04.465582  130103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:29:04.480999  130103 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:29:04.481056  130103 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:29:04.481303  130103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:29:04.481395  130103 cni.go:84] Creating CNI manager for ""
	I0731 19:29:04.481414  130103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:29:04.481424  130103 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:29:04.481503  130103 start.go:340] cluster config:
	{Name:addons-715925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:29:04.481623  130103 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:29:04.483599  130103 out.go:177] * Starting "addons-715925" primary control-plane node in "addons-715925" cluster
	I0731 19:29:04.484961  130103 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:29:04.485000  130103 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:29:04.485013  130103 cache.go:56] Caching tarball of preloaded images
	I0731 19:29:04.485102  130103 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:29:04.485130  130103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:29:04.485499  130103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/config.json ...
	I0731 19:29:04.485525  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/config.json: {Name:mk727355046b816e37cdce50043b5ec4432c4fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:04.485709  130103 start.go:360] acquireMachinesLock for addons-715925: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:29:04.485769  130103 start.go:364] duration metric: took 44.002µs to acquireMachinesLock for "addons-715925"
	I0731 19:29:04.485792  130103 start.go:93] Provisioning new machine with config: &{Name:addons-715925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:29:04.485873  130103 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 19:29:04.487578  130103 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 19:29:04.487760  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:04.487812  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:04.502642  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I0731 19:29:04.503097  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:04.503685  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:04.503708  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:04.504086  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:04.504301  130103 main.go:141] libmachine: (addons-715925) Calling .GetMachineName
	I0731 19:29:04.504446  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:04.504802  130103 start.go:159] libmachine.API.Create for "addons-715925" (driver="kvm2")
	I0731 19:29:04.504853  130103 client.go:168] LocalClient.Create starting
	I0731 19:29:04.504895  130103 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 19:29:04.680626  130103 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 19:29:04.776070  130103 main.go:141] libmachine: Running pre-create checks...
	I0731 19:29:04.776094  130103 main.go:141] libmachine: (addons-715925) Calling .PreCreateCheck
	I0731 19:29:04.776623  130103 main.go:141] libmachine: (addons-715925) Calling .GetConfigRaw
	I0731 19:29:04.777047  130103 main.go:141] libmachine: Creating machine...
	I0731 19:29:04.777060  130103 main.go:141] libmachine: (addons-715925) Calling .Create
	I0731 19:29:04.777193  130103 main.go:141] libmachine: (addons-715925) Creating KVM machine...
	I0731 19:29:04.778711  130103 main.go:141] libmachine: (addons-715925) DBG | found existing default KVM network
	I0731 19:29:04.779877  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:04.779717  130125 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012fad0}
	I0731 19:29:04.779937  130103 main.go:141] libmachine: (addons-715925) DBG | created network xml: 
	I0731 19:29:04.779956  130103 main.go:141] libmachine: (addons-715925) DBG | <network>
	I0731 19:29:04.779967  130103 main.go:141] libmachine: (addons-715925) DBG |   <name>mk-addons-715925</name>
	I0731 19:29:04.779978  130103 main.go:141] libmachine: (addons-715925) DBG |   <dns enable='no'/>
	I0731 19:29:04.779987  130103 main.go:141] libmachine: (addons-715925) DBG |   
	I0731 19:29:04.779996  130103 main.go:141] libmachine: (addons-715925) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 19:29:04.780008  130103 main.go:141] libmachine: (addons-715925) DBG |     <dhcp>
	I0731 19:29:04.780013  130103 main.go:141] libmachine: (addons-715925) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 19:29:04.780022  130103 main.go:141] libmachine: (addons-715925) DBG |     </dhcp>
	I0731 19:29:04.780030  130103 main.go:141] libmachine: (addons-715925) DBG |   </ip>
	I0731 19:29:04.780054  130103 main.go:141] libmachine: (addons-715925) DBG |   
	I0731 19:29:04.780071  130103 main.go:141] libmachine: (addons-715925) DBG | </network>
	I0731 19:29:04.780110  130103 main.go:141] libmachine: (addons-715925) DBG | 
	I0731 19:29:04.785443  130103 main.go:141] libmachine: (addons-715925) DBG | trying to create private KVM network mk-addons-715925 192.168.39.0/24...
	I0731 19:29:04.850545  130103 main.go:141] libmachine: (addons-715925) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925 ...
	I0731 19:29:04.850579  130103 main.go:141] libmachine: (addons-715925) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:29:04.850601  130103 main.go:141] libmachine: (addons-715925) DBG | private KVM network mk-addons-715925 192.168.39.0/24 created
	I0731 19:29:04.850629  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:04.850489  130125 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:29:04.850679  130103 main.go:141] libmachine: (addons-715925) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 19:29:05.139561  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:05.139426  130125 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa...
	I0731 19:29:05.268204  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:05.268030  130125 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/addons-715925.rawdisk...
	I0731 19:29:05.268240  130103 main.go:141] libmachine: (addons-715925) DBG | Writing magic tar header
	I0731 19:29:05.268254  130103 main.go:141] libmachine: (addons-715925) DBG | Writing SSH key tar header
	I0731 19:29:05.268267  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:05.268158  130125 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925 ...
	I0731 19:29:05.268292  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925
	I0731 19:29:05.268309  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 19:29:05.268321  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925 (perms=drwx------)
	I0731 19:29:05.268359  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:29:05.268437  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:29:05.268453  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 19:29:05.268489  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:29:05.268505  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 19:29:05.268514  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:29:05.268529  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home
	I0731 19:29:05.268543  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 19:29:05.268555  130103 main.go:141] libmachine: (addons-715925) DBG | Skipping /home - not owner
	I0731 19:29:05.268576  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:29:05.268589  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:29:05.268606  130103 main.go:141] libmachine: (addons-715925) Creating domain...
	I0731 19:29:05.269420  130103 main.go:141] libmachine: (addons-715925) define libvirt domain using xml: 
	I0731 19:29:05.269451  130103 main.go:141] libmachine: (addons-715925) <domain type='kvm'>
	I0731 19:29:05.269462  130103 main.go:141] libmachine: (addons-715925)   <name>addons-715925</name>
	I0731 19:29:05.269473  130103 main.go:141] libmachine: (addons-715925)   <memory unit='MiB'>4000</memory>
	I0731 19:29:05.269482  130103 main.go:141] libmachine: (addons-715925)   <vcpu>2</vcpu>
	I0731 19:29:05.269488  130103 main.go:141] libmachine: (addons-715925)   <features>
	I0731 19:29:05.269497  130103 main.go:141] libmachine: (addons-715925)     <acpi/>
	I0731 19:29:05.269506  130103 main.go:141] libmachine: (addons-715925)     <apic/>
	I0731 19:29:05.269517  130103 main.go:141] libmachine: (addons-715925)     <pae/>
	I0731 19:29:05.269525  130103 main.go:141] libmachine: (addons-715925)     
	I0731 19:29:05.269535  130103 main.go:141] libmachine: (addons-715925)   </features>
	I0731 19:29:05.269545  130103 main.go:141] libmachine: (addons-715925)   <cpu mode='host-passthrough'>
	I0731 19:29:05.269576  130103 main.go:141] libmachine: (addons-715925)   
	I0731 19:29:05.269597  130103 main.go:141] libmachine: (addons-715925)   </cpu>
	I0731 19:29:05.269607  130103 main.go:141] libmachine: (addons-715925)   <os>
	I0731 19:29:05.269616  130103 main.go:141] libmachine: (addons-715925)     <type>hvm</type>
	I0731 19:29:05.269626  130103 main.go:141] libmachine: (addons-715925)     <boot dev='cdrom'/>
	I0731 19:29:05.269641  130103 main.go:141] libmachine: (addons-715925)     <boot dev='hd'/>
	I0731 19:29:05.269654  130103 main.go:141] libmachine: (addons-715925)     <bootmenu enable='no'/>
	I0731 19:29:05.269665  130103 main.go:141] libmachine: (addons-715925)   </os>
	I0731 19:29:05.269677  130103 main.go:141] libmachine: (addons-715925)   <devices>
	I0731 19:29:05.269686  130103 main.go:141] libmachine: (addons-715925)     <disk type='file' device='cdrom'>
	I0731 19:29:05.269704  130103 main.go:141] libmachine: (addons-715925)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/boot2docker.iso'/>
	I0731 19:29:05.269727  130103 main.go:141] libmachine: (addons-715925)       <target dev='hdc' bus='scsi'/>
	I0731 19:29:05.269739  130103 main.go:141] libmachine: (addons-715925)       <readonly/>
	I0731 19:29:05.269750  130103 main.go:141] libmachine: (addons-715925)     </disk>
	I0731 19:29:05.269763  130103 main.go:141] libmachine: (addons-715925)     <disk type='file' device='disk'>
	I0731 19:29:05.269791  130103 main.go:141] libmachine: (addons-715925)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:29:05.269807  130103 main.go:141] libmachine: (addons-715925)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/addons-715925.rawdisk'/>
	I0731 19:29:05.269822  130103 main.go:141] libmachine: (addons-715925)       <target dev='hda' bus='virtio'/>
	I0731 19:29:05.269832  130103 main.go:141] libmachine: (addons-715925)     </disk>
	I0731 19:29:05.269843  130103 main.go:141] libmachine: (addons-715925)     <interface type='network'>
	I0731 19:29:05.269854  130103 main.go:141] libmachine: (addons-715925)       <source network='mk-addons-715925'/>
	I0731 19:29:05.269864  130103 main.go:141] libmachine: (addons-715925)       <model type='virtio'/>
	I0731 19:29:05.269874  130103 main.go:141] libmachine: (addons-715925)     </interface>
	I0731 19:29:05.269889  130103 main.go:141] libmachine: (addons-715925)     <interface type='network'>
	I0731 19:29:05.269906  130103 main.go:141] libmachine: (addons-715925)       <source network='default'/>
	I0731 19:29:05.269919  130103 main.go:141] libmachine: (addons-715925)       <model type='virtio'/>
	I0731 19:29:05.269929  130103 main.go:141] libmachine: (addons-715925)     </interface>
	I0731 19:29:05.269938  130103 main.go:141] libmachine: (addons-715925)     <serial type='pty'>
	I0731 19:29:05.269948  130103 main.go:141] libmachine: (addons-715925)       <target port='0'/>
	I0731 19:29:05.269956  130103 main.go:141] libmachine: (addons-715925)     </serial>
	I0731 19:29:05.269972  130103 main.go:141] libmachine: (addons-715925)     <console type='pty'>
	I0731 19:29:05.269983  130103 main.go:141] libmachine: (addons-715925)       <target type='serial' port='0'/>
	I0731 19:29:05.269993  130103 main.go:141] libmachine: (addons-715925)     </console>
	I0731 19:29:05.270005  130103 main.go:141] libmachine: (addons-715925)     <rng model='virtio'>
	I0731 19:29:05.270017  130103 main.go:141] libmachine: (addons-715925)       <backend model='random'>/dev/random</backend>
	I0731 19:29:05.270027  130103 main.go:141] libmachine: (addons-715925)     </rng>
	I0731 19:29:05.270048  130103 main.go:141] libmachine: (addons-715925)     
	I0731 19:29:05.270058  130103 main.go:141] libmachine: (addons-715925)     
	I0731 19:29:05.270064  130103 main.go:141] libmachine: (addons-715925)   </devices>
	I0731 19:29:05.270072  130103 main.go:141] libmachine: (addons-715925) </domain>
	I0731 19:29:05.270082  130103 main.go:141] libmachine: (addons-715925) 
	I0731 19:29:05.276041  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:21:b5:e6 in network default
	I0731 19:29:05.276617  130103 main.go:141] libmachine: (addons-715925) Ensuring networks are active...
	I0731 19:29:05.276637  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:05.277270  130103 main.go:141] libmachine: (addons-715925) Ensuring network default is active
	I0731 19:29:05.277574  130103 main.go:141] libmachine: (addons-715925) Ensuring network mk-addons-715925 is active
	I0731 19:29:05.278004  130103 main.go:141] libmachine: (addons-715925) Getting domain xml...
	I0731 19:29:05.278544  130103 main.go:141] libmachine: (addons-715925) Creating domain...
	I0731 19:29:06.674479  130103 main.go:141] libmachine: (addons-715925) Waiting to get IP...
	I0731 19:29:06.675364  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:06.675850  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:06.675926  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:06.675865  130125 retry.go:31] will retry after 271.598681ms: waiting for machine to come up
	I0731 19:29:06.949597  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:06.950140  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:06.950171  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:06.950093  130125 retry.go:31] will retry after 283.757518ms: waiting for machine to come up
	I0731 19:29:07.235357  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:07.235799  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:07.235822  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:07.235733  130125 retry.go:31] will retry after 434.066918ms: waiting for machine to come up
	I0731 19:29:07.671315  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:07.671715  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:07.671742  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:07.671674  130125 retry.go:31] will retry after 454.225101ms: waiting for machine to come up
	I0731 19:29:08.128266  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:08.128670  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:08.128695  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:08.128624  130125 retry.go:31] will retry after 459.247068ms: waiting for machine to come up
	I0731 19:29:08.589185  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:08.589684  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:08.589728  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:08.589665  130125 retry.go:31] will retry after 575.376406ms: waiting for machine to come up
	I0731 19:29:09.166332  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:09.166742  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:09.166768  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:09.166686  130125 retry.go:31] will retry after 965.991268ms: waiting for machine to come up
	I0731 19:29:10.134425  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:10.134903  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:10.134923  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:10.134872  130125 retry.go:31] will retry after 1.368485162s: waiting for machine to come up
	I0731 19:29:11.505444  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:11.505827  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:11.505849  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:11.505798  130125 retry.go:31] will retry after 1.510757371s: waiting for machine to come up
	I0731 19:29:13.018418  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:13.018855  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:13.018884  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:13.018781  130125 retry.go:31] will retry after 1.809878449s: waiting for machine to come up
	I0731 19:29:14.830581  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:14.831044  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:14.831074  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:14.830987  130125 retry.go:31] will retry after 2.137587319s: waiting for machine to come up
	I0731 19:29:16.971122  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:16.971484  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:16.971503  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:16.971446  130125 retry.go:31] will retry after 2.933911969s: waiting for machine to come up
	I0731 19:29:19.907193  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:19.907671  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:19.907699  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:19.907619  130125 retry.go:31] will retry after 3.252960875s: waiting for machine to come up
	I0731 19:29:23.163952  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:23.164444  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:23.164472  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:23.164334  130125 retry.go:31] will retry after 4.321243048s: waiting for machine to come up
	I0731 19:29:27.488876  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.489438  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has current primary IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.489460  130103 main.go:141] libmachine: (addons-715925) Found IP for machine: 192.168.39.147
	I0731 19:29:27.489473  130103 main.go:141] libmachine: (addons-715925) Reserving static IP address...
	I0731 19:29:27.490026  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find host DHCP lease matching {name: "addons-715925", mac: "52:54:00:6d:64:ee", ip: "192.168.39.147"} in network mk-addons-715925
	I0731 19:29:27.561180  130103 main.go:141] libmachine: (addons-715925) DBG | Getting to WaitForSSH function...
	I0731 19:29:27.561215  130103 main.go:141] libmachine: (addons-715925) Reserved static IP address: 192.168.39.147
	I0731 19:29:27.561230  130103 main.go:141] libmachine: (addons-715925) Waiting for SSH to be available...
	I0731 19:29:27.563779  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.564311  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:27.564339  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.564506  130103 main.go:141] libmachine: (addons-715925) DBG | Using SSH client type: external
	I0731 19:29:27.564561  130103 main.go:141] libmachine: (addons-715925) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa (-rw-------)
	I0731 19:29:27.564599  130103 main.go:141] libmachine: (addons-715925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:29:27.564612  130103 main.go:141] libmachine: (addons-715925) DBG | About to run SSH command:
	I0731 19:29:27.564633  130103 main.go:141] libmachine: (addons-715925) DBG | exit 0
	I0731 19:29:27.689829  130103 main.go:141] libmachine: (addons-715925) DBG | SSH cmd err, output: <nil>: 
	I0731 19:29:27.690058  130103 main.go:141] libmachine: (addons-715925) KVM machine creation complete!
	I0731 19:29:27.690402  130103 main.go:141] libmachine: (addons-715925) Calling .GetConfigRaw
	I0731 19:29:27.690966  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:27.691153  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:27.691310  130103 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:29:27.691325  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:27.692577  130103 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:29:27.692608  130103 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:29:27.692617  130103 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:29:27.692629  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:27.694881  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.695240  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:27.695268  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.695374  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:27.695552  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.695697  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.695805  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:27.695952  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:27.696157  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:27.696167  130103 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:29:27.800622  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:29:27.800652  130103 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:29:27.800671  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:27.803597  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.804002  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:27.804035  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.804227  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:27.804453  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.804633  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.804909  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:27.805092  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:27.805262  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:27.805274  130103 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:29:27.910394  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:29:27.910483  130103 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:29:27.910490  130103 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:29:27.910498  130103 main.go:141] libmachine: (addons-715925) Calling .GetMachineName
	I0731 19:29:27.910759  130103 buildroot.go:166] provisioning hostname "addons-715925"
	I0731 19:29:27.910784  130103 main.go:141] libmachine: (addons-715925) Calling .GetMachineName
	I0731 19:29:27.910982  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:27.913314  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.913634  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:27.913662  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.913738  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:27.913911  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.914077  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.914230  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:27.914430  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:27.914672  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:27.914689  130103 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-715925 && echo "addons-715925" | sudo tee /etc/hostname
	I0731 19:29:28.039498  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-715925
	
	I0731 19:29:28.039526  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.042484  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.042888  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.042936  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.043163  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.043373  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.043556  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.043736  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.043956  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:28.044166  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:28.044190  130103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-715925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-715925/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-715925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:29:28.158317  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:29:28.158349  130103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:29:28.158392  130103 buildroot.go:174] setting up certificates
	I0731 19:29:28.158407  130103 provision.go:84] configureAuth start
	I0731 19:29:28.158420  130103 main.go:141] libmachine: (addons-715925) Calling .GetMachineName
	I0731 19:29:28.158726  130103 main.go:141] libmachine: (addons-715925) Calling .GetIP
	I0731 19:29:28.161183  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.161550  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.161578  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.161728  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.163593  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.163973  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.163999  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.164114  130103 provision.go:143] copyHostCerts
	I0731 19:29:28.164207  130103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:29:28.164333  130103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:29:28.164395  130103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:29:28.164440  130103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.addons-715925 san=[127.0.0.1 192.168.39.147 addons-715925 localhost minikube]
	I0731 19:29:28.330547  130103 provision.go:177] copyRemoteCerts
	I0731 19:29:28.330611  130103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:29:28.330647  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.333106  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.333418  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.333453  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.333621  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.333811  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.333991  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.334098  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:28.415331  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:29:28.439433  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 19:29:28.462891  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:29:28.485694  130103 provision.go:87] duration metric: took 327.271478ms to configureAuth
	I0731 19:29:28.485725  130103 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:29:28.485913  130103 config.go:182] Loaded profile config "addons-715925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:29:28.486007  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.488338  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.488692  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.488719  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.488875  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.489084  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.489268  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.489469  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.489644  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:28.489806  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:28.489821  130103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:29:28.746688  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:29:28.746712  130103 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:29:28.746720  130103 main.go:141] libmachine: (addons-715925) Calling .GetURL
	I0731 19:29:28.747938  130103 main.go:141] libmachine: (addons-715925) DBG | Using libvirt version 6000000
	I0731 19:29:28.750067  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.750386  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.750405  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.750584  130103 main.go:141] libmachine: Docker is up and running!
	I0731 19:29:28.750601  130103 main.go:141] libmachine: Reticulating splines...
	I0731 19:29:28.750608  130103 client.go:171] duration metric: took 24.245744327s to LocalClient.Create
	I0731 19:29:28.750649  130103 start.go:167] duration metric: took 24.245847855s to libmachine.API.Create "addons-715925"
	I0731 19:29:28.750668  130103 start.go:293] postStartSetup for "addons-715925" (driver="kvm2")
	I0731 19:29:28.750680  130103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:29:28.750697  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.750950  130103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:29:28.750975  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.753095  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.753421  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.753447  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.753585  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.753766  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.753918  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.754026  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:28.835632  130103 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:29:28.840018  130103 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:29:28.840049  130103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:29:28.840129  130103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:29:28.840161  130103 start.go:296] duration metric: took 89.484265ms for postStartSetup
	I0731 19:29:28.840202  130103 main.go:141] libmachine: (addons-715925) Calling .GetConfigRaw
	I0731 19:29:28.840798  130103 main.go:141] libmachine: (addons-715925) Calling .GetIP
	I0731 19:29:28.843150  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.843459  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.843490  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.843690  130103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/config.json ...
	I0731 19:29:28.843903  130103 start.go:128] duration metric: took 24.358018339s to createHost
	I0731 19:29:28.843932  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.846455  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.846779  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.846805  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.846924  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.847164  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.847378  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.847487  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.847777  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:28.847922  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:28.847931  130103 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:29:28.950058  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454168.925862125
	
	I0731 19:29:28.950080  130103 fix.go:216] guest clock: 1722454168.925862125
	I0731 19:29:28.950087  130103 fix.go:229] Guest: 2024-07-31 19:29:28.925862125 +0000 UTC Remote: 2024-07-31 19:29:28.84391685 +0000 UTC m=+24.461945574 (delta=81.945275ms)
	I0731 19:29:28.950129  130103 fix.go:200] guest clock delta is within tolerance: 81.945275ms
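	Note on the clock check above: the reported delta is just the guest timestamp minus the host timestamp, 1722454168.925862125 - 1722454168.843916850 = 0.081945275 s, i.e. the 81.945275ms compared against the tolerance. Recomputing it from the two values in the log (illustrative only, assuming bc is available):
	# Exact decimal subtraction of the two timestamps shown above.
	echo '1722454168.925862125 - 1722454168.843916850' | bc
	# -> .081945275 (the 81.945275ms delta reported by minikube)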
	I0731 19:29:28.950138  130103 start.go:83] releasing machines lock for "addons-715925", held for 24.46435786s
	I0731 19:29:28.950158  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.950414  130103 main.go:141] libmachine: (addons-715925) Calling .GetIP
	I0731 19:29:28.952987  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.953321  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.953361  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.953501  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.953939  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.954133  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.954239  130103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:29:28.954286  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.954363  130103 ssh_runner.go:195] Run: cat /version.json
	I0731 19:29:28.954391  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.956845  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.957097  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.957129  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.957149  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.957251  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.957476  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.957491  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.957529  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.957647  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.957718  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.957817  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:28.957930  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.958059  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.958261  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:29.058932  130103 ssh_runner.go:195] Run: systemctl --version
	I0731 19:29:29.065031  130103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:29:29.222642  130103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:29:29.228923  130103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:29:29.228990  130103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:29:29.244261  130103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:29:29.244285  130103 start.go:495] detecting cgroup driver to use...
	I0731 19:29:29.244351  130103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:29:29.259719  130103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:29:29.273041  130103 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:29:29.273093  130103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:29:29.286060  130103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:29:29.298958  130103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:29:29.411567  130103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:29:29.540344  130103 docker.go:233] disabling docker service ...
	I0731 19:29:29.540422  130103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:29:29.555924  130103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:29:29.568381  130103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:29:29.700845  130103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:29:29.819524  130103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:29:29.833860  130103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:29:29.852510  130103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:29:29.852566  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.862467  130103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:29:29.862541  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.872623  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.882709  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.892895  130103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:29:29.903734  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.913681  130103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.930998  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
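	Note on the CRI-O configuration above: the sed commands patch /etc/crio/crio.conf.d/02-crio.conf in place. As a rough sketch, the settings they leave behind correspond to a drop-in like the one below, written here with tee instead of sed; the [crio.image]/[crio.runtime] section placement is assumed from the stock CRI-O configuration layout and is not shown in this log:
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio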
	I0731 19:29:29.941538  130103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:29:29.950963  130103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:29:29.951016  130103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:29:29.963304  130103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:29:29.972336  130103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:29:30.078706  130103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:29:30.213190  130103 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:29:30.213303  130103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:29:30.218293  130103 start.go:563] Will wait 60s for crictl version
	I0731 19:29:30.218367  130103 ssh_runner.go:195] Run: which crictl
	I0731 19:29:30.222123  130103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:29:30.260938  130103 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:29:30.261047  130103 ssh_runner.go:195] Run: crio --version
	I0731 19:29:30.289684  130103 ssh_runner.go:195] Run: crio --version
	I0731 19:29:30.319330  130103 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:29:30.320491  130103 main.go:141] libmachine: (addons-715925) Calling .GetIP
	I0731 19:29:30.322838  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:30.323164  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:30.323192  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:30.323401  130103 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:29:30.327491  130103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:29:30.339852  130103 kubeadm.go:883] updating cluster {Name:addons-715925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:29:30.339962  130103 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:29:30.340007  130103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:29:30.372050  130103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 19:29:30.372118  130103 ssh_runner.go:195] Run: which lz4
	I0731 19:29:30.376128  130103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 19:29:30.380300  130103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 19:29:30.380329  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 19:29:31.770664  130103 crio.go:462] duration metric: took 1.394563738s to copy over tarball
	I0731 19:29:31.770753  130103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 19:29:34.066094  130103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.2953118s)
	I0731 19:29:34.066130  130103 crio.go:469] duration metric: took 2.295432134s to extract the tarball
	I0731 19:29:34.066141  130103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 19:29:34.109244  130103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:29:34.150321  130103 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:29:34.150348  130103 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:29:34.150359  130103 kubeadm.go:934] updating node { 192.168.39.147 8443 v1.30.3 crio true true} ...
	I0731 19:29:34.150508  130103 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-715925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:29:34.150595  130103 ssh_runner.go:195] Run: crio config
	I0731 19:29:34.197787  130103 cni.go:84] Creating CNI manager for ""
	I0731 19:29:34.197811  130103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:29:34.197824  130103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:29:34.197850  130103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-715925 NodeName:addons-715925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:29:34.198038  130103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-715925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
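	Note on the kubeadm configuration above: this generated document is what later gets copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and handed to kubeadm init. A hedged way to sanity-check such a file without modifying the node, assuming the kubeadm binary shown in this log supports it, is a dry run:
	# Render the init steps from the generated config without applying them.
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run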
	
	I0731 19:29:34.198117  130103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:29:34.208277  130103 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:29:34.208339  130103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:29:34.217756  130103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 19:29:34.234609  130103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:29:34.250799  130103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 19:29:34.266713  130103 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0731 19:29:34.270369  130103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:29:34.281847  130103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:29:34.410999  130103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:29:34.428980  130103 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925 for IP: 192.168.39.147
	I0731 19:29:34.429008  130103 certs.go:194] generating shared ca certs ...
	I0731 19:29:34.429031  130103 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.429206  130103 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:29:34.734405  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt ...
	I0731 19:29:34.734432  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt: {Name:mk4d5f8eac5af4bed4fe496450a7ef33fb556296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.734604  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key ...
	I0731 19:29:34.734616  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key: {Name:mk4606c3c07cf89342d6e10a5cac72aecafe6804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.734685  130103 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:29:34.986862  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt ...
	I0731 19:29:34.986892  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt: {Name:mk95e510e38e7df0f774b9947d241d17543c0a4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.987053  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key ...
	I0731 19:29:34.987071  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key: {Name:mk85f3a38f86eed75b7fe062aaa793236334658d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.987137  130103 certs.go:256] generating profile certs ...
	I0731 19:29:34.987200  130103 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.key
	I0731 19:29:34.987214  130103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt with IP's: []
	I0731 19:29:35.054304  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt ...
	I0731 19:29:35.054338  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: {Name:mkc793b360bd473fa37e04348368bff9302c6c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.054498  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.key ...
	I0731 19:29:35.054510  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.key: {Name:mk18437f70299f073c6f602ddcfbfcda0594a73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.054573  130103 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key.b1593293
	I0731 19:29:35.054592  130103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt.b1593293 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147]
	I0731 19:29:35.169660  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt.b1593293 ...
	I0731 19:29:35.169698  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt.b1593293: {Name:mk4ff9ac7cf283ced725033db8d542a71d850615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.169888  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key.b1593293 ...
	I0731 19:29:35.169908  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key.b1593293: {Name:mkbbf4be2e519f0905edc297fdbc4c8d4c1c482b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.170003  130103 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt.b1593293 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt
	I0731 19:29:35.170120  130103 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key.b1593293 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key
	I0731 19:29:35.170173  130103 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.key
	I0731 19:29:35.170190  130103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.crt with IP's: []
	I0731 19:29:35.390027  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.crt ...
	I0731 19:29:35.390059  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.crt: {Name:mka698132995fe1e592227c9d5a8ad9d6dcfae50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.390248  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.key ...
	I0731 19:29:35.390265  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.key: {Name:mk4a7ee209fc8d27c2805c44e7ee824f61d0fcd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.390488  130103 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:29:35.390532  130103 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:29:35.390570  130103 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:29:35.390600  130103 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:29:35.391226  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:29:35.418432  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:29:35.442325  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:29:35.466460  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:29:35.490519  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 19:29:35.514064  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 19:29:35.537218  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:29:35.560272  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:29:35.583218  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:29:35.606436  130103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:29:35.623095  130103 ssh_runner.go:195] Run: openssl version
	I0731 19:29:35.628879  130103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:29:35.639651  130103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:35.643904  130103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:35.643945  130103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:35.649490  130103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
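	Note on the two openssl steps above: the link name b5213941.0 is the OpenSSL subject-name hash of minikubeCA.pem with a ".0" suffix, which is exactly what the "openssl x509 -hash -noout" call computes. A small sketch making that relationship explicit (illustrative only, paths taken from this run):
	# Derive the hashed symlink name from the certificate itself.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 for this CA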
	I0731 19:29:35.659602  130103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:29:35.663531  130103 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:29:35.663605  130103 kubeadm.go:392] StartCluster: {Name:addons-715925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:29:35.663680  130103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:29:35.663720  130103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:29:35.699253  130103 cri.go:89] found id: ""
	I0731 19:29:35.699339  130103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 19:29:35.709293  130103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:29:35.718238  130103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:29:35.727600  130103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 19:29:35.727621  130103 kubeadm.go:157] found existing configuration files:
	
	I0731 19:29:35.727662  130103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:29:35.736344  130103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 19:29:35.736399  130103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 19:29:35.745073  130103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:29:35.753551  130103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 19:29:35.753596  130103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 19:29:35.762456  130103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:29:35.770654  130103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 19:29:35.770703  130103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:29:35.779513  130103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:29:35.787827  130103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 19:29:35.787889  130103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
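
Editor's note: the block above shows the stale-config check pattern: each kubeconfig-style file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent (here the files simply do not exist yet, so every grep exits with status 2). The sketch below is an illustrative reimplementation of that pattern running locally, not the actual kubeadm.go code, which executes these commands over SSH.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // checkAndClean removes any config file that does not reference the expected
    // control-plane endpoint, so kubeadm init can regenerate it from scratch.
    func checkAndClean(endpoint string, paths []string) {
        for _, p := range paths {
            if err := exec.Command("sudo", "grep", endpoint, p).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, p)
                _ = exec.Command("sudo", "rm", "-f", p).Run()
            }
        }
    }

    func main() {
        checkAndClean("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
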
	I0731 19:29:35.796698  130103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 19:29:35.992220  130103 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:29:45.872109  130103 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 19:29:45.872193  130103 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 19:29:45.872285  130103 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 19:29:45.872394  130103 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 19:29:45.872481  130103 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 19:29:45.872569  130103 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 19:29:45.874157  130103 out.go:204]   - Generating certificates and keys ...
	I0731 19:29:45.874261  130103 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 19:29:45.874359  130103 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 19:29:45.874459  130103 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 19:29:45.874533  130103 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 19:29:45.874632  130103 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 19:29:45.874716  130103 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 19:29:45.874767  130103 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 19:29:45.874879  130103 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-715925 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0731 19:29:45.874952  130103 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 19:29:45.875097  130103 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-715925 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0731 19:29:45.875188  130103 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 19:29:45.875282  130103 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 19:29:45.875346  130103 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 19:29:45.875425  130103 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 19:29:45.875469  130103 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 19:29:45.875517  130103 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 19:29:45.875562  130103 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 19:29:45.875634  130103 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 19:29:45.875687  130103 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 19:29:45.875752  130103 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 19:29:45.875805  130103 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 19:29:45.877372  130103 out.go:204]   - Booting up control plane ...
	I0731 19:29:45.877474  130103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 19:29:45.877558  130103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 19:29:45.877632  130103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 19:29:45.877743  130103 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 19:29:45.877812  130103 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 19:29:45.877844  130103 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 19:29:45.877957  130103 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 19:29:45.878030  130103 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 19:29:45.878082  130103 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001184953s
	I0731 19:29:45.878160  130103 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 19:29:45.878242  130103 kubeadm.go:310] [api-check] The API server is healthy after 5.00232287s
	I0731 19:29:45.878383  130103 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 19:29:45.878508  130103 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 19:29:45.878565  130103 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 19:29:45.878785  130103 kubeadm.go:310] [mark-control-plane] Marking the node addons-715925 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 19:29:45.878876  130103 kubeadm.go:310] [bootstrap-token] Using token: ule4iw.fyjygud86o13jnep
	I0731 19:29:45.880371  130103 out.go:204]   - Configuring RBAC rules ...
	I0731 19:29:45.880503  130103 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 19:29:45.880602  130103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 19:29:45.880737  130103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 19:29:45.880850  130103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 19:29:45.880949  130103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 19:29:45.881058  130103 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 19:29:45.881196  130103 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 19:29:45.881259  130103 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 19:29:45.881322  130103 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 19:29:45.881331  130103 kubeadm.go:310] 
	I0731 19:29:45.881442  130103 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 19:29:45.881453  130103 kubeadm.go:310] 
	I0731 19:29:45.881513  130103 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 19:29:45.881519  130103 kubeadm.go:310] 
	I0731 19:29:45.881545  130103 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 19:29:45.881610  130103 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 19:29:45.881690  130103 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 19:29:45.881702  130103 kubeadm.go:310] 
	I0731 19:29:45.881778  130103 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 19:29:45.881787  130103 kubeadm.go:310] 
	I0731 19:29:45.881830  130103 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 19:29:45.881836  130103 kubeadm.go:310] 
	I0731 19:29:45.881879  130103 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 19:29:45.881944  130103 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 19:29:45.882010  130103 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 19:29:45.882019  130103 kubeadm.go:310] 
	I0731 19:29:45.882134  130103 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 19:29:45.882246  130103 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 19:29:45.882254  130103 kubeadm.go:310] 
	I0731 19:29:45.882340  130103 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ule4iw.fyjygud86o13jnep \
	I0731 19:29:45.882428  130103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 19:29:45.882459  130103 kubeadm.go:310] 	--control-plane 
	I0731 19:29:45.882474  130103 kubeadm.go:310] 
	I0731 19:29:45.882584  130103 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 19:29:45.882595  130103 kubeadm.go:310] 
	I0731 19:29:45.882662  130103 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ule4iw.fyjygud86o13jnep \
	I0731 19:29:45.882764  130103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 19:29:45.882776  130103 cni.go:84] Creating CNI manager for ""
	I0731 19:29:45.882783  130103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:29:45.884467  130103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 19:29:45.885857  130103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 19:29:45.897393  130103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
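
Editor's note: the 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log, so the sketch below only illustrates what a bridge-plus-portmap conflist of that kind typically looks like; the plugin list and the 10.244.0.0/16 subnet are assumptions, not the file minikube actually writes.

    package main

    import "os"

    // A representative bridge CNI conflist (illustrative values only).
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Write to a scratch path rather than /etc/cni/net.d to keep the sketch harmless.
        if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
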
	I0731 19:29:45.919025  130103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 19:29:45.919115  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:45.919116  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-715925 minikube.k8s.io/updated_at=2024_07_31T19_29_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=addons-715925 minikube.k8s.io/primary=true
	I0731 19:29:46.045725  130103 ops.go:34] apiserver oom_adj: -16
	I0731 19:29:46.045909  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:46.546185  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:47.046871  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:47.546374  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:48.046035  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:48.546363  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:49.046031  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:49.546335  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:50.046265  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:50.546363  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:51.046970  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:51.546263  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:52.046309  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:52.546576  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:53.046479  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:53.546554  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:54.046653  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:54.546581  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:55.046796  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:55.546805  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:56.046765  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:56.546133  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:57.046263  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:57.546801  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:58.046171  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:58.546059  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:59.046786  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:59.132969  130103 kubeadm.go:1113] duration metric: took 13.213927952s to wait for elevateKubeSystemPrivileges
	I0731 19:29:59.133015  130103 kubeadm.go:394] duration metric: took 23.469414816s to StartCluster
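
Editor's note: the run of identical "kubectl get sa default" lines above is a retry loop: minikube polls roughly every 500ms until the default service account exists, then records the elapsed time (13.2s here). The sketch below shows that polling pattern in isolation; the binary path and the two-minute deadline are assumptions, and minikube's real implementation runs the command over SSH from kubeadm.go.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Succeeds only once the "default" service account has been created.
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
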
	I0731 19:29:59.133041  130103 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:59.133177  130103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:29:59.133682  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:59.133928  130103 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:29:59.133948  130103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 19:29:59.133988  130103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
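
Editor's note: the out-of-order "Setting addon ..." and libmachine lines that follow are produced by enabling each requested addon concurrently, one goroutine per entry in the toEnable map above. This is a simplified sketch of that fan-out under that assumption; the enable body is a placeholder, not minikube's addons package.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        toEnable := map[string]bool{
            "ingress": true, "metrics-server": true, "registry": true,
            "storage-provisioner": true, "volcano": true, "yakd": true,
        }
        var wg sync.WaitGroup
        for name, enabled := range toEnable {
            if !enabled {
                continue
            }
            wg.Add(1)
            go func(addon string) {
                defer wg.Done()
                // Each goroutine logs and applies one addon, which interleaves the output.
                fmt.Printf("Setting addon %s=true in profile %q\n", addon, "addons-715925")
                // ... apply the addon's manifests here ...
            }(name)
        }
        wg.Wait()
    }
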
	I0731 19:29:59.134114  130103 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-715925"
	I0731 19:29:59.134127  130103 addons.go:69] Setting default-storageclass=true in profile "addons-715925"
	I0731 19:29:59.134152  130103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-715925"
	I0731 19:29:59.134148  130103 addons.go:69] Setting cloud-spanner=true in profile "addons-715925"
	I0731 19:29:59.134156  130103 addons.go:69] Setting metrics-server=true in profile "addons-715925"
	I0731 19:29:59.134165  130103 config.go:182] Loaded profile config "addons-715925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:29:59.134182  130103 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-715925"
	I0731 19:29:59.134195  130103 addons.go:69] Setting gcp-auth=true in profile "addons-715925"
	I0731 19:29:59.134196  130103 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-715925"
	I0731 19:29:59.134209  130103 addons.go:69] Setting ingress=true in profile "addons-715925"
	I0731 19:29:59.134214  130103 mustload.go:65] Loading cluster: addons-715925
	I0731 19:29:59.134220  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134228  130103 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-715925"
	I0731 19:29:59.134232  130103 addons.go:69] Setting volumesnapshots=true in profile "addons-715925"
	I0731 19:29:59.134233  130103 addons.go:69] Setting helm-tiller=true in profile "addons-715925"
	I0731 19:29:59.134251  130103 addons.go:234] Setting addon helm-tiller=true in "addons-715925"
	I0731 19:29:59.134251  130103 addons.go:234] Setting addon volumesnapshots=true in "addons-715925"
	I0731 19:29:59.134251  130103 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-715925"
	I0731 19:29:59.134283  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134285  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134402  130103 config.go:182] Loaded profile config "addons-715925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:29:59.134665  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134685  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134715  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134726  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134186  130103 addons.go:234] Setting addon cloud-spanner=true in "addons-715925"
	I0731 19:29:59.134748  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134765  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134219  130103 addons.go:69] Setting volcano=true in profile "addons-715925"
	I0731 19:29:59.134788  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134790  130103 addons.go:69] Setting ingress-dns=true in profile "addons-715925"
	I0731 19:29:59.134800  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134798  130103 addons.go:69] Setting inspektor-gadget=true in profile "addons-715925"
	I0731 19:29:59.134815  130103 addons.go:234] Setting addon ingress-dns=true in "addons-715925"
	I0731 19:29:59.134824  130103 addons.go:234] Setting addon inspektor-gadget=true in "addons-715925"
	I0731 19:29:59.134673  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134856  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134861  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134955  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.135005  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134771  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134175  130103 addons.go:69] Setting storage-provisioner=true in profile "addons-715925"
	I0731 19:29:59.135289  130103 addons.go:234] Setting addon storage-provisioner=true in "addons-715925"
	I0731 19:29:59.135325  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.135339  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.135366  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.135496  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134116  130103 addons.go:69] Setting yakd=true in profile "addons-715925"
	I0731 19:29:59.135515  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.135540  130103 addons.go:234] Setting addon yakd=true in "addons-715925"
	I0731 19:29:59.135569  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134224  130103 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-715925"
	I0731 19:29:59.135666  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.135671  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.135693  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.135923  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.135952  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.136026  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.136045  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134186  130103 addons.go:234] Setting addon metrics-server=true in "addons-715925"
	I0731 19:29:59.136129  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134227  130103 addons.go:234] Setting addon ingress=true in "addons-715925"
	I0731 19:29:59.136203  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.136535  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.136556  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.146340  130103 out.go:177] * Verifying Kubernetes components...
	I0731 19:29:59.134840  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.147024  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.147054  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.148274  130103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:29:59.134198  130103 addons.go:69] Setting registry=true in profile "addons-715925"
	I0731 19:29:59.148409  130103 addons.go:234] Setting addon registry=true in "addons-715925"
	I0731 19:29:59.148447  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.148822  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.148847  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134839  130103 addons.go:234] Setting addon volcano=true in "addons-715925"
	I0731 19:29:59.149574  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.149915  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.149951  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.155536  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I0731 19:29:59.156111  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.156685  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.156708  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.157146  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.157711  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.157745  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.167057  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I0731 19:29:59.167633  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.168149  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.168171  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.168514  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.169076  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.169114  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.169575  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0731 19:29:59.170075  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.170847  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.170865  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.171418  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.172001  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.172034  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.173273  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43621
	I0731 19:29:59.173437  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0731 19:29:59.173449  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32835
	I0731 19:29:59.173515  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I0731 19:29:59.174874  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.174916  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.175124  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.175214  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.175296  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0731 19:29:59.175393  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39741
	I0731 19:29:59.175464  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0731 19:29:59.175517  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39913
	I0731 19:29:59.175647  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.175677  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.175690  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.176053  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.176111  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.176141  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.176174  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.176296  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.176312  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.176325  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.176580  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.176595  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.176746  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.176756  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.176814  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.176869  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.176910  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.176956  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.177467  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.177576  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.177592  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.177897  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.177938  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.178717  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.178740  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.180665  130103 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-715925"
	I0731 19:29:59.180706  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.181051  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.181083  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.181688  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.181724  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.181808  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.181905  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.181927  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.182043  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.182058  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.182362  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.182427  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.182476  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.182514  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.183191  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.183260  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.183314  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.183828  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.183857  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.184366  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.184403  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.187466  130103 addons.go:234] Setting addon default-storageclass=true in "addons-715925"
	I0731 19:29:59.187516  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.187859  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.187879  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.193685  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.194078  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.194117  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.194939  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0731 19:29:59.195412  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.201980  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.202014  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.202755  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.202991  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.204766  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.207146  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 19:29:59.208813  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 19:29:59.210243  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 19:29:59.211648  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 19:29:59.212996  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 19:29:59.213781  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0731 19:29:59.213974  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0731 19:29:59.214648  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.215318  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.215339  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.215586  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 19:29:59.215774  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.215940  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.216504  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0731 19:29:59.216589  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.216608  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.217028  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.217388  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.217414  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.217767  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.217791  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.218190  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.218370  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 19:29:59.218576  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.221078  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.221482  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 19:29:59.221897  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.222553  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.222614  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.222812  130103 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 19:29:59.222915  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 19:29:59.222930  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 19:29:59.222953  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.223629  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0731 19:29:59.224054  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.224578  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.224597  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.224992  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.225518  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0731 19:29:59.225610  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.225647  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.225925  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.226449  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.226476  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.226816  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.227080  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.227306  130103 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 19:29:59.227325  130103 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 19:29:59.227345  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.227352  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.227395  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.228212  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0731 19:29:59.228699  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.228988  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0731 19:29:59.229315  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35605
	I0731 19:29:59.229502  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.229670  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.229695  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.229693  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.229720  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.230097  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.230185  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.230194  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.230208  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.230364  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.230574  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.230594  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.230614  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.230817  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.230859  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.231061  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.231480  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0731 19:29:59.231651  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.231677  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.231792  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.231969  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.232134  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.232311  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.233373  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.233394  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.233467  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I0731 19:29:59.233974  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.234053  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.234464  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.234476  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.234521  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.234535  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.234509  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.234871  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.234874  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:29:59.234912  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:29:59.235086  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.235139  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:29:59.235160  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:29:59.235188  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:29:59.235201  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:29:59.235209  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:29:59.235651  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:29:59.235657  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:29:59.235678  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 19:29:59.235781  130103 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 19:29:59.238001  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0731 19:29:59.238371  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.238847  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.238868  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.239065  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.239129  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.239235  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.239611  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.239697  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I0731 19:29:59.240074  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.240552  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.240578  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.240703  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.240920  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.241387  130103 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 19:29:59.241465  130103 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 19:29:59.241500  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.241593  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.241539  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.242909  130103 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 19:29:59.243508  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 19:29:59.243527  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.243432  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0731 19:29:59.243745  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I0731 19:29:59.244121  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.244198  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.244611  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.244628  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.244730  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.244743  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.244918  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 19:29:59.244933  130103 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 19:29:59.244951  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.244996  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.244998  130103 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 19:29:59.245103  130103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:29:59.245186  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.245834  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.245858  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.246260  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.247137  130103 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 19:29:59.247155  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 19:29:59.247176  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.247925  130103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:29:59.247940  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 19:29:59.247956  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.249410  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.251454  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 19:29:59.251639  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.251666  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.251685  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.251701  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.251797  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.251800  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36493
	I0731 19:29:59.252080  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.252278  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.252429  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.252617  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.252650  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.252665  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.252847  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.253048  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.253142  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.253281  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.253301  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.253326  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.253366  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.253409  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 19:29:59.253424  130103 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 19:29:59.253443  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.253465  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.253665  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.253714  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.253967  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.253987  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.254005  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.254032  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.254096  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.254202  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.254241  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.254325  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.254600  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.254784  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.255124  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.255139  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.255306  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.255499  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.255701  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.255871  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.256108  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.256284  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.256567  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.257189  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.257324  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.257771  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.257988  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.258139  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.258324  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.258377  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.260418  130103 out.go:177]   - Using image docker.io/busybox:stable
	I0731 19:29:59.261814  130103 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 19:29:59.263227  130103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 19:29:59.263251  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 19:29:59.263273  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.266646  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.267090  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.267111  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.267302  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.267498  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.267635  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.267774  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.268300  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I0731 19:29:59.268981  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.269740  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.269763  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.270410  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.270725  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.272585  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.272607  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0731 19:29:59.273132  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.273255  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0731 19:29:59.273606  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.273626  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.273740  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.274037  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.274289  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.274311  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.274318  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.274671  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.274741  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42403
	I0731 19:29:59.274820  130103 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 19:29:59.274890  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.275201  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.275677  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.275699  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.276075  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.276242  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.276871  130103 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 19:29:59.276890  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 19:29:59.276909  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.276993  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.277022  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.277199  130103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 19:29:59.277210  130103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 19:29:59.277225  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.278949  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0731 19:29:59.279665  130103 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 19:29:59.279747  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.280342  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.280361  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.280799  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.280923  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.281251  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.281277  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.281373  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.281426  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.282352  130103 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 19:29:59.283702  130103 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 19:29:59.283971  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.283985  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0731 19:29:59.283995  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.284010  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.284016  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.284039  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.284179  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.284260  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.284351  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.284378  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.284495  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.284495  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.285109  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.285512  130103 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 19:29:59.285747  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.285771  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.285868  130103 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 19:29:59.285888  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 19:29:59.285905  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.286214  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.286386  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.288012  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.288161  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45703
	I0731 19:29:59.288659  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.288820  130103 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 19:29:59.289107  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.289220  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.289680  130103 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 19:29:59.289692  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.289862  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.290299  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.290321  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.290362  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.290512  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.290688  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.290695  130103 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 19:29:59.290710  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 19:29:59.290733  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.290848  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.290949  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.291505  130103 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 19:29:59.291521  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 19:29:59.291537  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	W0731 19:29:59.292816  130103 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59816->192.168.39.147:22: read: connection reset by peer
	I0731 19:29:59.292862  130103 retry.go:31] will retry after 189.807281ms: ssh: handshake failed: read tcp 192.168.39.1:59816->192.168.39.147:22: read: connection reset by peer
	I0731 19:29:59.292907  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.294515  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.294743  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.294787  130103 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 19:29:59.295033  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.295052  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.295088  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.295107  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.295277  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.295375  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.295482  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.295610  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.295642  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.295752  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.295765  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.295901  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.296099  130103 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 19:29:59.296113  130103 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 19:29:59.296124  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.299212  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.299655  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.299679  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.299854  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.300009  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.300136  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.300243  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.523765  130103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:29:59.523810  130103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 19:29:59.544137  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 19:29:59.560206  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 19:29:59.560229  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 19:29:59.589332  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 19:29:59.589369  130103 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 19:29:59.645097  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:29:59.648217  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 19:29:59.650026  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 19:29:59.668901  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 19:29:59.677496  130103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 19:29:59.677518  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 19:29:59.703544  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 19:29:59.703569  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 19:29:59.726007  130103 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 19:29:59.726033  130103 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 19:29:59.749272  130103 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 19:29:59.749309  130103 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 19:29:59.779413  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 19:29:59.789521  130103 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 19:29:59.789543  130103 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 19:29:59.800790  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 19:29:59.800812  130103 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 19:29:59.805233  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 19:29:59.819018  130103 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 19:29:59.819040  130103 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 19:29:59.836362  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 19:29:59.836382  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 19:29:59.928997  130103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 19:29:59.929021  130103 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 19:29:59.960439  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 19:29:59.960467  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 19:29:59.963759  130103 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 19:29:59.963780  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 19:29:59.988432  130103 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 19:29:59.988469  130103 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 19:30:00.023451  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 19:30:00.023479  130103 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 19:30:00.117205  130103 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 19:30:00.117236  130103 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 19:30:00.121076  130103 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 19:30:00.121102  130103 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 19:30:00.140161  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 19:30:00.140183  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 19:30:00.249556  130103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 19:30:00.249589  130103 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 19:30:00.270672  130103 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 19:30:00.270704  130103 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 19:30:00.299367  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 19:30:00.306796  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 19:30:00.352711  130103 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 19:30:00.352744  130103 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 19:30:00.371429  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 19:30:00.371451  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 19:30:00.375932  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 19:30:00.375948  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 19:30:00.430498  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 19:30:00.516745  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 19:30:00.516774  130103 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 19:30:00.544205  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 19:30:00.544232  130103 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 19:30:00.605046  130103 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 19:30:00.605079  130103 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 19:30:00.624781  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 19:30:00.704940  130103 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 19:30:00.704966  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 19:30:00.788107  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 19:30:00.788131  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 19:30:01.007391  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 19:30:01.007423  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 19:30:01.108677  130103 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 19:30:01.108705  130103 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 19:30:01.162346  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 19:30:01.406198  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 19:30:01.406232  130103 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 19:30:01.510330  130103 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 19:30:01.510374  130103 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 19:30:01.610122  130103 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.086324195s)
	I0731 19:30:01.610181  130103 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.086349388s)
	I0731 19:30:01.610199  130103 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0731 19:30:01.611190  130103 node_ready.go:35] waiting up to 6m0s for node "addons-715925" to be "Ready" ...
	I0731 19:30:01.614483  130103 node_ready.go:49] node "addons-715925" has status "Ready":"True"
	I0731 19:30:01.614505  130103 node_ready.go:38] duration metric: took 3.288322ms for node "addons-715925" to be "Ready" ...
	I0731 19:30:01.614514  130103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:01.621051  130103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:01.697412  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 19:30:01.825984  130103 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 19:30:01.826010  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 19:30:01.948227  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.404045582s)
	I0731 19:30:01.948285  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:01.948299  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:01.948686  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:01.948707  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:01.948717  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:01.948725  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:01.948733  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:01.948997  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:01.949010  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:02.114883  130103 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-715925" context rescaled to 1 replicas
	I0731 19:30:02.172617  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 19:30:03.690049  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:04.202506  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.557362637s)
	I0731 19:30:04.202564  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202577  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.202570  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.554319225s)
	I0731 19:30:04.202596  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.552542661s)
	I0731 19:30:04.202612  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202626  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.202636  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202638  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.53370189s)
	I0731 19:30:04.202646  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.202671  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202685  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.202915  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.202953  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.202962  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.202971  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202978  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.203056  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.203073  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.203081  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.203089  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.203394  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.203412  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.203464  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.203473  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.203481  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.203488  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.203538  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.203546  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.203555  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.203562  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.204974  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.204999  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.205007  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.205007  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.205015  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.205020  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.205662  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.205683  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.205704  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.338869  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.338897  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.339275  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.339324  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.339343  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:05.724660  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:05.938838  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.15938541s)
	I0731 19:30:05.938896  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:05.938909  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:05.939274  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:05.939294  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:05.939307  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:05.939317  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:05.939581  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:05.939603  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:05.939621  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:06.078749  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:06.078773  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:06.079146  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:06.079164  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:06.079181  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:06.341992  130103 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 19:30:06.342047  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:30:06.345518  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:30:06.345956  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:30:06.345991  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:30:06.346152  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:30:06.346381  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:30:06.346580  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:30:06.346740  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:30:06.681729  130103 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 19:30:06.930192  130103 addons.go:234] Setting addon gcp-auth=true in "addons-715925"
	I0731 19:30:06.930246  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:30:06.930576  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:30:06.930613  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:30:06.946803  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I0731 19:30:06.947300  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:30:06.947835  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:30:06.947862  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:30:06.948287  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:30:06.948759  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:30:06.948792  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:30:06.964506  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0731 19:30:06.965001  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:30:06.965609  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:30:06.965640  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:30:06.966003  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:30:06.966214  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:30:06.967889  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:30:06.968120  130103 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 19:30:06.968141  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:30:06.971089  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:30:06.971530  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:30:06.971560  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:30:06.971779  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:30:06.972016  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:30:06.972197  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:30:06.972354  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:30:07.309680  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.504408475s)
	I0731 19:30:07.309733  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.01032075s)
	I0731 19:30:07.309745  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.002919625s)
	I0731 19:30:07.309771  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309792  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.309771  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309855  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.87932118s)
	I0731 19:30:07.309886  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.685073069s)
	I0731 19:30:07.309862  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.309911  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309923  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.309888  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309953  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.309746  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309978  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.310086  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310096  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310114  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.310121  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.310384  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310397  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310406  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.310413  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.310615  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310637  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310658  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310658  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310668  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310677  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.310681  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310685  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.310689  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310698  130103 addons.go:475] Verifying addon ingress=true in "addons-715925"
	I0731 19:30:07.310905  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310933  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310939  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310940  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310951  130103 addons.go:475] Verifying addon metrics-server=true in "addons-715925"
	I0731 19:30:07.310966  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310974  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310981  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.310990  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.311048  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.311056  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.311063  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.311070  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.311335  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.311360  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.311368  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.312340  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.312390  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.312403  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.312393  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.312525  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.312536  130103 addons.go:475] Verifying addon registry=true in "addons-715925"
	I0731 19:30:07.313377  130103 out.go:177] * Verifying ingress addon...
	I0731 19:30:07.315323  130103 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-715925 service yakd-dashboard -n yakd-dashboard
	
	I0731 19:30:07.315330  130103 out.go:177] * Verifying registry addon...
	I0731 19:30:07.316056  130103 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 19:30:07.317597  130103 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 19:30:07.321048  130103 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 19:30:07.321071  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:07.329891  130103 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 19:30:07.329913  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:07.904199  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:07.907131  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:07.909078  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:08.180702  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.018291521s)
	W0731 19:30:08.180774  130103 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 19:30:08.180831  130103 retry.go:31] will retry after 254.101349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
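
Note: the "ensure CRDs are installed first" error above is the usual ordering race when CustomResourceDefinitions and resources that use them are applied in a single kubectl invocation; minikube copes by retrying (and by re-applying with --force below). Outside the harness the same race can be avoided by waiting for the new CRDs to reach the Established condition before applying the VolumeSnapshotClass. A minimal sketch, using the CRD names from the stdout above:

    # wait until the freshly created snapshot CRDs are Established
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    # only then apply the class that references the new API group
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
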
	I0731 19:30:08.322901  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:08.325188  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:08.435734  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 19:30:08.839063  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:08.849994  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:09.097097  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.399621776s)
	I0731 19:30:09.097147  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.924485604s)
	I0731 19:30:09.097163  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:09.097179  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:09.097180  130103 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.129040033s)
	I0731 19:30:09.097194  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:09.097211  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:09.097508  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:09.097520  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:09.097576  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:09.097587  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:09.097582  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:09.097619  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:09.097630  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:09.097596  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:09.097683  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:09.097691  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:09.097875  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:09.097952  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:09.097964  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:09.098005  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:09.098017  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:09.098026  130103 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-715925"
	I0731 19:30:09.098949  130103 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 19:30:09.099884  130103 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 19:30:09.101447  130103 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 19:30:09.102184  130103 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 19:30:09.102822  130103 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 19:30:09.102836  130103 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 19:30:09.119899  130103 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 19:30:09.119925  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:09.168738  130103 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 19:30:09.168765  130103 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 19:30:09.232292  130103 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 19:30:09.232318  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 19:30:09.320721  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:09.323672  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:09.333002  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 19:30:09.610514  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:09.831394  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:09.843995  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:10.109235  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:10.132791  130103 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:10.132815  130103 pod_ready.go:81] duration metric: took 8.511731828s for pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:10.132824  130103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:10.320822  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:10.324002  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:10.464239  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.028457595s)
	I0731 19:30:10.464294  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:10.464311  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:10.464659  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:10.464694  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:10.464707  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:10.464725  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:10.464734  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:10.464975  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:10.464991  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:10.653577  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:10.774948  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441898325s)
	I0731 19:30:10.775019  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:10.775037  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:10.775335  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:10.775373  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:10.775378  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:10.775426  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:10.775441  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:10.775707  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:10.775728  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:10.775710  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:10.777007  130103 addons.go:475] Verifying addon gcp-auth=true in "addons-715925"
	I0731 19:30:10.778764  130103 out.go:177] * Verifying gcp-auth addon...
	I0731 19:30:10.780852  130103 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 19:30:10.794559  130103 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 19:30:10.794581  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:10.820773  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:10.844627  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:11.107541  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:11.285569  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:11.320927  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:11.323406  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:11.612247  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:11.784896  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:11.822434  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:11.824147  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:12.108582  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:12.139316  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:12.285203  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:12.322100  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:12.322509  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:12.608999  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:12.785444  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:12.820479  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:12.823103  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:13.108650  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:13.285097  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:13.321835  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:13.323511  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:13.609142  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:13.785225  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:13.821962  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:13.825212  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:14.110490  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:14.139607  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:14.285537  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:14.320573  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:14.324873  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:14.608134  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:14.784448  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:14.822307  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:14.823425  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:15.108344  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:15.288727  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:15.323142  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:15.327388  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:15.608711  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:15.938457  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:15.939006  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:15.942476  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:16.107575  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:16.140071  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:16.284420  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:16.333828  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:16.336572  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:16.608339  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:16.639384  130103 pod_ready.go:97] pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:30:16 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.147 HostIPs:[{IP:192.168.39.147}] PodIP: PodIPs:[] StartTime:2024-07-31 19:29:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-31 19:30:03 +0000 UTC,FinishedAt:2024-07-31 19:30:14 +0000 UTC,ContainerID:cri-o://f657bab2874e87ea97fccfa5dbe80ab18fdf1d8024fdbea331cdbecc5eecbaaa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f657bab2874e87ea97fccfa5dbe80ab18fdf1d8024fdbea331cdbecc5eecbaaa Started:0xc001eb6690 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0731 19:30:16.639414  130103 pod_ready.go:81] duration metric: took 6.506584187s for pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace to be "Ready" ...
	E0731 19:30:16.639426  130103 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:30:16 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.147 HostIPs:[{IP:192.168.39.147}] PodIP: PodIPs:[] StartTime:2024-07-31 19:29:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-31 19:30:03 +0000 UTC,FinishedAt:2024-07-31 19:30:14 +0000 UTC,ContainerID:cri-o://f657bab2874e87ea97fccfa5dbe80ab18fdf1d8024fdbea331cdbecc5eecbaaa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f657bab2874e87ea97fccfa5dbe80ab18fdf1d8024fdbea331cdbecc5eecbaaa Started:0xc001eb6690 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0731 19:30:16.639435  130103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.644465  130103 pod_ready.go:92] pod "etcd-addons-715925" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:16.644484  130103 pod_ready.go:81] duration metric: took 5.041381ms for pod "etcd-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.644492  130103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.656259  130103 pod_ready.go:92] pod "kube-apiserver-addons-715925" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:16.656294  130103 pod_ready.go:81] duration metric: took 11.791708ms for pod "kube-apiserver-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.656309  130103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.663955  130103 pod_ready.go:92] pod "kube-controller-manager-addons-715925" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:16.663978  130103 pod_ready.go:81] duration metric: took 7.66022ms for pod "kube-controller-manager-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.663991  130103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tfzvz" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.670766  130103 pod_ready.go:92] pod "kube-proxy-tfzvz" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:16.670797  130103 pod_ready.go:81] duration metric: took 6.797853ms for pod "kube-proxy-tfzvz" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.670809  130103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.784850  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:16.821364  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:16.825545  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:17.037796  130103 pod_ready.go:92] pod "kube-scheduler-addons-715925" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:17.037825  130103 pod_ready.go:81] duration metric: took 367.007684ms for pod "kube-scheduler-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:17.037835  130103 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:17.107386  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:17.284140  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:17.324480  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:17.326449  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:17.608348  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:17.785030  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:17.821031  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:17.822240  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:18.109050  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:18.286464  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:18.320140  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:18.324065  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:18.607607  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:18.784799  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:18.821885  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:18.822639  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:19.060626  130103 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:19.108046  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:19.285317  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:19.320629  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:19.322255  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:19.608430  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:19.786083  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:19.821142  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:19.823036  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:20.107792  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:20.284717  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:20.320295  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:20.321777  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:20.608954  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:20.785228  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:20.821972  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:20.829672  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:21.108719  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:21.287333  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:21.319838  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:21.323034  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:21.543633  130103 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:21.608221  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:21.784581  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:21.822616  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:21.823526  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:22.108156  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:22.285278  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:22.320972  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:22.322913  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:22.619640  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:22.783897  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:22.820596  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:22.822881  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:23.107464  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:23.284684  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:23.320486  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:23.322403  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:23.543677  130103 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:23.611258  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:23.785016  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:23.821277  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:23.824185  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:24.111987  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:24.284483  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:24.320595  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:24.323483  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:24.608319  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:24.785308  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:24.822549  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:24.823079  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:25.108362  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:25.285081  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:25.320945  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:25.321872  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:25.546465  130103 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:25.607131  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:25.785199  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:25.821306  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:25.822756  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:26.107765  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:26.634799  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:26.635500  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:26.635730  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:26.635853  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:26.784401  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:26.820815  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:26.823711  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:27.107799  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:27.285451  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:27.320811  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:27.322907  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:27.608647  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:27.785012  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:27.820858  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:27.822307  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:28.044774  130103 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:28.044795  130103 pod_ready.go:81] duration metric: took 11.006953747s for pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:28.044803  130103 pod_ready.go:38] duration metric: took 26.430278638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
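
The pod_ready loop above is functionally close to kubectl wait over the same label selectors; a hand-run equivalent for two of the listed components, assuming the standard kube-system labels, would look like:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m

Unlike the harness, kubectl wait does not skip pods that are already in phase Succeeded, which is why the replaced coredns pod had to be special-cased by pod_ready.go above.
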
	I0731 19:30:28.044820  130103 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:30:28.044872  130103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:28.064085  130103 api_server.go:72] duration metric: took 28.930117985s to wait for apiserver process to appear ...
	I0731 19:30:28.064118  130103 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:30:28.064161  130103 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0731 19:30:28.069566  130103 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0731 19:30:28.070493  130103 api_server.go:141] control plane version: v1.30.3
	I0731 19:30:28.070510  130103 api_server.go:131] duration metric: took 6.384944ms to wait for apiserver health ...
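
The healthz check is a plain GET against the API server; it can be reproduced by hand against the endpoint logged above (unauthenticated access to the health endpoints assumes the default system:public-info-viewer binding):

    # raw probe of the endpoint from the log
    curl -k https://192.168.39.147:8443/healthz
    # or via the kubeconfig credentials
    kubectl get --raw '/healthz'
    kubectl get --raw '/readyz?verbose'
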
	I0731 19:30:28.070518  130103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:30:28.082601  130103 system_pods.go:59] 18 kube-system pods found
	I0731 19:30:28.082651  130103 system_pods.go:61] "coredns-7db6d8ff4d-fzb4m" [43b53489-b06e-4cb4-9515-be6b4e7f5588] Running
	I0731 19:30:28.082663  130103 system_pods.go:61] "csi-hostpath-attacher-0" [139d55af-90b3-45b0-92dc-f37933d17669] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 19:30:28.082673  130103 system_pods.go:61] "csi-hostpath-resizer-0" [f4f165ba-2937-41b7-9dac-e9a67ff22feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 19:30:28.082684  130103 system_pods.go:61] "csi-hostpathplugin-4j5wp" [cd5f8368-bef5-476f-ab47-b7c63c2ec4f7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 19:30:28.082691  130103 system_pods.go:61] "etcd-addons-715925" [c0548359-7576-4e62-9bfb-3402be548366] Running
	I0731 19:30:28.082697  130103 system_pods.go:61] "kube-apiserver-addons-715925" [4d5bab31-ad1d-4a7b-bab2-e3b6ada76520] Running
	I0731 19:30:28.082702  130103 system_pods.go:61] "kube-controller-manager-addons-715925" [c6016a60-c185-493a-9390-d012bf650d44] Running
	I0731 19:30:28.082709  130103 system_pods.go:61] "kube-ingress-dns-minikube" [bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397] Running
	I0731 19:30:28.082713  130103 system_pods.go:61] "kube-proxy-tfzvz" [6f30c198-5a23-42cb-8a8a-3e81ac3dce14] Running
	I0731 19:30:28.082718  130103 system_pods.go:61] "kube-scheduler-addons-715925" [7a801eb1-d479-4df9-ad7e-be2807f32007] Running
	I0731 19:30:28.082726  130103 system_pods.go:61] "metrics-server-c59844bb4-s4tts" [16f96003-84b9-4f23-a5c6-b1f5047bf0f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 19:30:28.082735  130103 system_pods.go:61] "nvidia-device-plugin-daemonset-2p88n" [8b668c12-5647-4aa6-b190-d9e2e127ea94] Running
	I0731 19:30:28.082743  130103 system_pods.go:61] "registry-698f998955-x87x7" [2a48b934-362f-4a2d-b591-308e178c9f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 19:30:28.082752  130103 system_pods.go:61] "registry-proxy-2j7k4" [2550e10a-7f6c-463d-a4b7-da2406bd5137] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 19:30:28.082765  130103 system_pods.go:61] "snapshot-controller-745499f584-9n7kz" [28c39ab0-f8ef-4a21-900f-a53ede22dced] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 19:30:28.082780  130103 system_pods.go:61] "snapshot-controller-745499f584-nlmlq" [82bcba7d-98ac-4401-8b9a-aa6a93bdc494] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 19:30:28.082786  130103 system_pods.go:61] "storage-provisioner" [126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3] Running
	I0731 19:30:28.082795  130103 system_pods.go:61] "tiller-deploy-6677d64bcd-9f7w2" [451aed79-261a-45ab-aa7c-e595c0dd9688] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 19:30:28.082807  130103 system_pods.go:74] duration metric: took 12.282368ms to wait for pod list to return data ...
	I0731 19:30:28.082818  130103 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:30:28.094245  130103 default_sa.go:45] found service account: "default"
	I0731 19:30:28.094269  130103 default_sa.go:55] duration metric: took 11.441558ms for default service account to be created ...
	I0731 19:30:28.094278  130103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:30:28.106776  130103 system_pods.go:86] 18 kube-system pods found
	I0731 19:30:28.106803  130103 system_pods.go:89] "coredns-7db6d8ff4d-fzb4m" [43b53489-b06e-4cb4-9515-be6b4e7f5588] Running
	I0731 19:30:28.106810  130103 system_pods.go:89] "csi-hostpath-attacher-0" [139d55af-90b3-45b0-92dc-f37933d17669] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 19:30:28.106816  130103 system_pods.go:89] "csi-hostpath-resizer-0" [f4f165ba-2937-41b7-9dac-e9a67ff22feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 19:30:28.106823  130103 system_pods.go:89] "csi-hostpathplugin-4j5wp" [cd5f8368-bef5-476f-ab47-b7c63c2ec4f7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 19:30:28.106830  130103 system_pods.go:89] "etcd-addons-715925" [c0548359-7576-4e62-9bfb-3402be548366] Running
	I0731 19:30:28.106837  130103 system_pods.go:89] "kube-apiserver-addons-715925" [4d5bab31-ad1d-4a7b-bab2-e3b6ada76520] Running
	I0731 19:30:28.106843  130103 system_pods.go:89] "kube-controller-manager-addons-715925" [c6016a60-c185-493a-9390-d012bf650d44] Running
	I0731 19:30:28.106849  130103 system_pods.go:89] "kube-ingress-dns-minikube" [bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397] Running
	I0731 19:30:28.106858  130103 system_pods.go:89] "kube-proxy-tfzvz" [6f30c198-5a23-42cb-8a8a-3e81ac3dce14] Running
	I0731 19:30:28.106864  130103 system_pods.go:89] "kube-scheduler-addons-715925" [7a801eb1-d479-4df9-ad7e-be2807f32007] Running
	I0731 19:30:28.106870  130103 system_pods.go:89] "metrics-server-c59844bb4-s4tts" [16f96003-84b9-4f23-a5c6-b1f5047bf0f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 19:30:28.106878  130103 system_pods.go:89] "nvidia-device-plugin-daemonset-2p88n" [8b668c12-5647-4aa6-b190-d9e2e127ea94] Running
	I0731 19:30:28.106885  130103 system_pods.go:89] "registry-698f998955-x87x7" [2a48b934-362f-4a2d-b591-308e178c9f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 19:30:28.106897  130103 system_pods.go:89] "registry-proxy-2j7k4" [2550e10a-7f6c-463d-a4b7-da2406bd5137] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 19:30:28.106911  130103 system_pods.go:89] "snapshot-controller-745499f584-9n7kz" [28c39ab0-f8ef-4a21-900f-a53ede22dced] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 19:30:28.106922  130103 system_pods.go:89] "snapshot-controller-745499f584-nlmlq" [82bcba7d-98ac-4401-8b9a-aa6a93bdc494] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 19:30:28.106928  130103 system_pods.go:89] "storage-provisioner" [126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3] Running
	I0731 19:30:28.106935  130103 system_pods.go:89] "tiller-deploy-6677d64bcd-9f7w2" [451aed79-261a-45ab-aa7c-e595c0dd9688] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 19:30:28.106942  130103 system_pods.go:126] duration metric: took 12.658446ms to wait for k8s-apps to be running ...
	I0731 19:30:28.106951  130103 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:30:28.106996  130103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:30:28.111242  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:28.123392  130103 system_svc.go:56] duration metric: took 16.431246ms WaitForService to wait for kubelet
	I0731 19:30:28.123422  130103 kubeadm.go:582] duration metric: took 28.98946126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:30:28.123457  130103 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:30:28.128170  130103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:30:28.128204  130103 node_conditions.go:123] node cpu capacity is 2
	I0731 19:30:28.128219  130103 node_conditions.go:105] duration metric: took 4.75563ms to run NodePressure ...
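
The NodePressure step reads the node's reported capacity and conditions; the same figures (17734596Ki ephemeral storage, 2 CPUs) can be pulled straight from the node object, assuming the single node carries the profile name addons-715925:

    kubectl get node addons-715925 -o jsonpath='{.status.capacity}{"\n"}'
    kubectl describe node addons-715925 | grep -A8 'Conditions:'
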
	I0731 19:30:28.128234  130103 start.go:241] waiting for startup goroutines ...
	I0731 19:30:28.284867  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:28.321764  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:28.322152  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:28.608183  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:28.784686  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:28.820893  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:28.824617  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:29.106969  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:29.284750  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:29.320508  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:29.323565  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:29.608525  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:29.787318  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:29.822141  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:29.823751  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:30.107709  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:30.285108  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:30.321085  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:30.322192  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:30.608096  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:30.784597  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:30.820636  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:30.823929  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:31.108670  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:31.286877  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:31.322182  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:31.322670  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:31.607800  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:31.784678  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:31.820261  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:31.823417  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:32.108655  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:32.284858  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:32.321229  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:32.322407  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:32.608100  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:32.785866  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:32.820641  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:32.822049  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:33.546515  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:33.546591  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:33.550853  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:33.552588  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:33.608327  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:33.784891  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:33.822382  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:33.822949  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:34.108104  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:34.284665  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:34.320175  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:34.322593  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:34.610157  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:34.785070  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:34.834576  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:34.836095  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:35.108884  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:35.284584  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:35.320915  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:35.322928  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:35.608081  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:35.784287  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:35.821501  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:35.823517  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:36.109075  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:36.284829  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:36.320874  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:36.323358  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:36.607804  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:36.784376  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:36.820424  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:36.826230  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:37.108392  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:37.284956  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:37.320777  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:37.323424  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:37.607162  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:37.784421  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:37.819903  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:37.823411  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:38.111872  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:38.284897  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:38.321232  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:38.323522  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:38.609135  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:38.785516  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:38.821427  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:38.823187  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:39.108032  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:39.287315  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:39.321686  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:39.324049  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:39.608542  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:39.784899  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:39.820560  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:39.823068  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:40.107570  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:40.284786  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:40.320480  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:40.324633  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:40.614528  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:40.785169  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:40.821036  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:40.823254  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:41.109091  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:41.285068  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:41.322654  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:41.324005  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:41.608771  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:41.784417  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:41.820106  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:41.822337  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:42.108970  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:42.284505  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:42.320153  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:42.322256  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:42.610721  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:42.785207  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:42.821245  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:42.822795  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:43.107825  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:43.284694  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:43.321730  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:43.322427  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:43.613134  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:43.785104  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:43.824767  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:43.824803  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:44.110325  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:44.284715  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:44.320671  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:44.322673  130103 kapi.go:107] duration metric: took 37.005073658s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 19:30:44.607680  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:44.784513  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:44.820404  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:45.119350  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:45.284909  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:45.321225  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:45.618067  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:45.784886  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:45.820681  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:46.109401  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:46.285776  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:46.363403  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:46.608900  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:46.785157  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:46.821720  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:47.108063  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:47.285019  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:47.320488  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:47.608933  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:47.785019  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:47.821203  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:48.108697  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:48.284785  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:48.320772  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:48.607788  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:48.785366  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:48.821268  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:49.108906  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:49.284111  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:49.320815  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:49.608474  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:49.784911  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:49.820668  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:50.107721  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:50.284715  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:50.320908  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:50.609905  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:50.784609  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:50.821912  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:51.107817  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:51.285157  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:51.321509  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:51.611084  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:52.013155  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:52.013952  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:52.109613  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:52.286015  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:52.322008  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:52.608132  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:52.784878  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:52.820495  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:53.114581  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:53.285484  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:53.321414  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:53.612479  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:53.785067  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:53.821766  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:54.108249  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:54.285087  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:54.320567  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:54.609647  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:54.784128  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:54.821013  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:55.107933  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:55.284469  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:55.320401  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:55.608365  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:55.784564  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:55.821376  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:56.108353  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:56.285192  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:56.321502  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:56.614584  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:56.793996  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:56.821850  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:57.138550  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:57.285886  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:57.321772  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:57.608126  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:57.784387  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:57.820041  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:58.111851  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:58.284357  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:58.321859  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:58.608776  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:58.785430  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:58.821268  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:59.107771  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:59.285822  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:59.321160  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:59.609107  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:59.784072  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:59.820612  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:00.109055  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:00.284901  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:00.320790  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:00.607949  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:00.784948  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:00.820802  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:01.108071  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:01.285098  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:01.321272  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:01.608388  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:01.785563  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:01.820188  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:02.110479  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:02.285134  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:02.321491  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:02.608651  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:02.784127  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:02.820914  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:03.107038  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:03.284773  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:03.320564  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:03.608628  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:04.157208  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:04.159944  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:04.162399  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:04.283937  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:04.320684  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:04.609148  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:04.784274  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:04.821221  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:05.107435  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:05.285083  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:05.320849  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:05.610064  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:05.785415  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:05.820464  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:06.108827  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:06.284511  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:06.319993  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:06.608086  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:06.785445  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:06.820441  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:07.108395  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:07.284902  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:07.321166  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:07.609429  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:07.785787  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:07.821505  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:08.107937  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:08.284722  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:08.320315  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:08.610636  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:08.784636  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:08.820936  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:09.107912  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:09.284234  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:09.320780  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:09.608358  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:09.784764  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:09.820580  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:10.108876  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:10.284708  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:10.321283  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:10.610522  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:10.784456  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:10.820684  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:11.107598  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:11.284774  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:11.324569  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:11.608315  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:11.784675  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:11.820071  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:12.107752  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:12.284758  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:12.320564  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:12.608544  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:12.785846  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:12.820802  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:13.107465  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:13.285182  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:13.322097  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:13.607876  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:13.784927  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:13.821643  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:14.108956  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:14.595832  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:14.598740  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:14.618431  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:14.785308  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:14.821234  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:15.108737  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:15.286112  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:15.324326  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:15.613274  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:15.784301  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:15.822120  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:16.109744  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:16.284976  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:16.320676  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:16.607781  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:16.785064  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:16.821036  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:17.107801  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:17.285151  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:17.324635  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:17.609216  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:17.784471  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:17.823014  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:18.107759  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:18.291268  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:18.324422  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:18.612428  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:18.785235  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:18.821523  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:19.109797  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:19.284162  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:19.334293  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:19.623490  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:19.786273  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:19.822155  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:20.107959  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:20.286896  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:20.321530  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:20.611566  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:20.783913  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:20.820822  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:21.108926  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:21.284224  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:21.321091  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:21.608172  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:21.785167  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:21.821414  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:22.108064  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:22.286795  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:22.320902  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:22.612951  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:22.785268  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:22.821009  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:23.112040  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:23.285554  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:23.320414  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:23.608217  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:23.784581  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:23.820660  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:24.107902  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:24.284946  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:24.320403  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:24.608054  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:24.785183  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:24.823197  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:25.269125  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:25.285597  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:25.321204  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:25.608196  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:25.784910  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:25.820647  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:26.107406  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:26.284723  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:26.320992  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:26.607632  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:26.786002  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:26.821179  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:27.107430  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:27.285111  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:27.321715  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:27.616803  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:27.784674  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:27.820412  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:28.109490  130103 kapi.go:107] duration metric: took 1m19.007303321s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 19:31:28.285067  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:28.320989  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:28.785997  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:28.821755  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:29.284682  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:29.321013  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:29.785056  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:29.821387  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:30.285078  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:30.320956  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:30.785368  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:30.820301  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:31.285121  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:31.321173  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:31.784206  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:31.821360  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:32.284904  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:32.320822  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:32.785642  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:32.820836  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:33.284383  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:33.321501  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:33.784707  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:33.820709  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:34.284977  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:34.321267  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:34.784048  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:34.821823  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:35.287149  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:35.324383  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:35.784451  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:35.821650  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:36.285313  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:36.325279  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:36.784057  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:36.821063  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:37.285326  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:37.321795  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:37.784169  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:37.820906  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:38.285137  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:38.321253  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:38.784221  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:38.821362  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:39.284604  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:39.321675  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:39.784755  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:39.820651  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:40.284872  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:40.320489  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:40.784446  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:40.820932  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:41.285066  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:41.322357  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:41.786164  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:41.821649  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:42.284496  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:42.320461  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:42.784535  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:42.820105  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:43.286457  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:43.321350  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:43.784987  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:43.821223  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:44.285154  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:44.320858  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:44.785369  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:44.820742  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:45.285326  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:45.323659  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:45.784504  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:45.820692  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:46.285644  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:46.322309  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:46.784823  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:46.820583  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:47.285382  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:47.322015  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:47.786911  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:47.820986  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:48.284940  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:48.321285  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:48.784789  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:48.821018  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:49.284719  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:49.321466  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:49.785464  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:49.821286  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:50.284942  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:50.321044  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:50.785667  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:50.820660  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:51.284989  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:51.321242  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:51.788417  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:51.820925  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:52.285041  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:52.321207  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:52.784226  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:52.821155  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:53.285102  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:53.321977  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:53.784472  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:53.820693  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:54.284635  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:54.320881  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:54.784539  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:54.820274  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:55.284591  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:55.320495  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:55.784950  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:55.821193  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:56.284074  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:56.320946  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:56.784968  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:56.821112  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:57.284255  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:57.321609  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:57.784898  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:57.825301  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:58.284851  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:58.321233  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:58.784430  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:58.820265  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:59.285251  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:59.321487  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:59.785405  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:59.820461  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:00.284852  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:00.321013  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:00.785272  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:00.822654  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:01.284432  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:01.321722  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:01.784721  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:01.820851  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:02.284654  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:02.320638  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:02.784555  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:02.820711  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:03.284812  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:03.321597  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:03.784674  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:03.820858  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:04.285029  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:04.320935  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:04.785934  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:04.821558  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:05.288813  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:05.325296  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:05.785254  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:05.821289  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:06.285214  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:06.320802  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:06.785048  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:06.820866  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:07.285046  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:07.322128  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:07.786575  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:07.820623  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:08.284745  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:08.320768  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:08.787741  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:08.821222  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:09.291709  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:09.321236  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:09.785566  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:09.820950  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:10.285134  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:10.321305  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:10.784338  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:10.821488  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:11.284321  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:11.322052  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:11.785321  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:11.821886  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:12.284417  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:12.320943  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:12.785162  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:12.821228  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:13.285175  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:13.321527  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:13.785190  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:13.821859  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:14.284062  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:14.322435  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:14.784410  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:14.820665  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:15.284593  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:15.320904  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:15.785564  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:15.820755  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:16.284857  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:16.320942  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:16.785362  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:16.820312  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:17.284516  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:17.322101  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:17.785047  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:17.821050  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:18.285396  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:18.320814  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:18.784435  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:18.820978  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:19.284468  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:19.321475  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:19.784460  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:19.820320  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:20.284834  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:20.321110  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:20.784075  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:20.823117  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:21.285547  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:21.322063  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:21.786145  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:21.821328  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:22.284115  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:22.321166  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:22.784896  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:22.820814  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:23.285158  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:23.321420  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:23.784975  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:23.821241  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:24.284917  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:24.321634  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:24.784597  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:24.820382  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:25.284323  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:25.321739  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:25.784827  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:25.820882  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:26.285175  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:26.321401  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:26.784247  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:26.820862  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:27.285096  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:27.322787  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:27.785153  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:27.821346  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:28.285353  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:28.322187  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:29.065777  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:29.067604  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:29.285918  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:29.321555  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:29.784295  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:29.822643  130103 kapi.go:107] duration metric: took 2m22.506580424s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 19:32:30.285376  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:30.785187  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:31.285535  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:31.785442  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:32.285037  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:32.785236  130103 kapi.go:107] duration metric: took 2m22.004379575s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 19:32:32.787096  130103 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-715925 cluster.
	I0731 19:32:32.788383  130103 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 19:32:32.789514  130103 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 19:32:32.790704  130103 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, helm-tiller, yakd, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0731 19:32:32.791984  130103 addons.go:510] duration metric: took 2m33.657998011s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher metrics-server helm-tiller yakd inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0731 19:32:32.792032  130103 start.go:246] waiting for cluster config update ...
	I0731 19:32:32.792056  130103 start.go:255] writing updated cluster config ...
	I0731 19:32:32.792335  130103 ssh_runner.go:195] Run: rm -f paused
	I0731 19:32:32.847217  130103 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 19:32:32.849149  130103 out.go:177] * Done! kubectl is now configured to use "addons-715925" cluster and "default" namespace by default
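
For illustration only, and not part of the captured log: a minimal sketch of the opt-out described in the gcp-auth messages above ("add a label with the `gcp-auth-skip-secret` key to your pod configuration"). Only the label key comes from the log output; the pod name, container image, and the label value "true" are assumptions made for the sketch.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # hypothetical name for the example
      labels:
        gcp-auth-skip-secret: "true"     # key taken from the log above; value assumed
    spec:
      containers:
      - name: app                        # hypothetical container name
        image: nginx                     # hypothetical image for the sketch

A pod created from a manifest like this is the kind the log says gcp-auth would leave alone, while pods created without the label receive the mounted GCP credentials.
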
	
	
	==> CRI-O <==
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.837024109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454581836992941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ae05062-874b-4528-9758-386aeb261c93 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.837511845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a1bc881-e189-4756-82e6-c3cedf41609a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.837586160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a1bc881-e189-4756-82e6-c3cedf41609a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.838002599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ed79966f6af9ecd4f1d92cff93ca01f67e231df8339417ed4a70a5bd37dc77a,PodSandboxId:4f059fa81b036762f55e37fc60476ed262fac5d8033b7bfdc7823c45ee08088e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722454574884511387,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-hw5cv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159,},Annotations:map[string]string{io.kubernetes.container.hash: e455fdcd,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eb5fac72ab9789e1ae64e1914c012999675d86c75bb25fc04024108f72f2af,PodSandboxId:90af3eb44c3b9b87f9e5cef21c5551cf0a9786b89ac1139ce572eba28d734387,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722454434696019220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8401d1a8-6dd2-40c9-8e23-deb823f5b208,},Annotations:map[string]string{io.kubernet
es.container.hash: 62d342f3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c174df487e549a44e4bcf555e30263a99c3c51908705a0c5f10e072b5549c6d8,PodSandboxId:ac6cc8b053bdb3bdb6b1af470a8f609ad7b6a80bae9836c268ad21042104db44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722454357926521523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a390ff63-8c7c-40de-a
874-20112644ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f8bde95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b13432b806ed5af7648fbfa6684ab2b53c0ae4f960d4a8e8d795f23019e89e,PodSandboxId:1e2dd7d6e5d142787383541043cab58c84b0dc1a8897cb2d658effd5a310daeb,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722454272598618806,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8smgp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b1a27a4-6c31-416a-b17c-fd63272f66a9,},Anno
tations:map[string]string{io.kubernetes.container.hash: da9f8ed1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0a0d9ea24c8f7b037f93184712db3555c0011ab694079725612736e2d36b92,PodSandboxId:e0a9936c2f86927c9632664b1ba9e7e8db3bfc1c5aac9d3f2ee032723998ec42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722454271180200289,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-95v5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 33319
19d-7251-49f9-b21d-55078786d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: de725f98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acc20d13d3bc1b75ebc726fafe9ff4ae146ce6bd01305036d3078a076c9e48d,PodSandboxId:ad5d705d2b17b756b1e64c67ff1ce241c932d5b1beba35de0e7359652e38ef4a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722454256728982665,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-s4tts,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 16f96003-84b9-4f23-a5c6-b1f5047bf0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 501ef6d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542,PodSandboxId:b12d2a576bca9bb9bede1e19922be7fd3e2a99bfafbe9c8141823699a227e26f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722454207862452115,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3,},Annotations:map[string]string{io.kubernetes.container.hash: 131ec37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c,PodSandboxId:c24ffde6a55f26d9d7699b205ed25be9b7beeae5ba21d8479993cd545de0743d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454203005072752,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name
: coredns-7db6d8ff4d-fzb4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43b53489-b06e-4cb4-9515-be6b4e7f5588,},Annotations:map[string]string{io.kubernetes.container.hash: 6f89aeb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a,PodSandboxId:e7a00f7c882a2a0c962b046e4dc63d3696cd64086ed16900736a407ffbac2c40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454201493789328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfzvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f30c198-5a23-42cb-8a8a-3e81ac3dce14,},Annotations:map[string]string{io.kubernetes.container.hash: b6458cb1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8,PodSandboxId:9867fdd58f216124e97b07e78d2cf248e529890ddd0b0fbdeeb09128aba4d04f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454179684668090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8dd9fb67173c0838ca349b97994d63,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814,PodSandboxId:03c861eb8f9baa0b351139e9b61cc6c3bc50ecaa86cbafdf6e69cf27d10cbea7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722454179690805575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcf05e09ab846407ce6f5cc016c5936,},Annotations:map[string]string{io.kubernetes.container.hash: e1633df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8,PodSandboxId:7a615a0816098b4b57ae43b5a6c84653e02217b4a93a8a50b26eb461c3da170f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,Cr
eatedAt:1722454179635198579,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91dc58de568c063e3805468402f4b65e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5,PodSandboxId:4c9318500bde794a83060cb785866f7c0f0a8ab1b3cdc22ce7a8777fba61cf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,C
reatedAt:1722454179588690580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d24d4034029e15cb6159863f99c4af6,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a1bc881-e189-4756-82e6-c3cedf41609a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.877850656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4c9fc55-2e6b-4230-841a-e4471d1c7241 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.877991270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4c9fc55-2e6b-4230-841a-e4471d1c7241 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.879284489Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5250142d-90f7-40b3-893f-8c3d9462f6bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.880591952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454581880565133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5250142d-90f7-40b3-893f-8c3d9462f6bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.881286918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98b3d201-3980-4a8e-b8e6-3489c719f78c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.881341594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98b3d201-3980-4a8e-b8e6-3489c719f78c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.881656572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ed79966f6af9ecd4f1d92cff93ca01f67e231df8339417ed4a70a5bd37dc77a,PodSandboxId:4f059fa81b036762f55e37fc60476ed262fac5d8033b7bfdc7823c45ee08088e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722454574884511387,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-hw5cv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159,},Annotations:map[string]string{io.kubernetes.container.hash: e455fdcd,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eb5fac72ab9789e1ae64e1914c012999675d86c75bb25fc04024108f72f2af,PodSandboxId:90af3eb44c3b9b87f9e5cef21c5551cf0a9786b89ac1139ce572eba28d734387,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722454434696019220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8401d1a8-6dd2-40c9-8e23-deb823f5b208,},Annotations:map[string]string{io.kubernet
es.container.hash: 62d342f3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c174df487e549a44e4bcf555e30263a99c3c51908705a0c5f10e072b5549c6d8,PodSandboxId:ac6cc8b053bdb3bdb6b1af470a8f609ad7b6a80bae9836c268ad21042104db44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722454357926521523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a390ff63-8c7c-40de-a
874-20112644ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f8bde95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b13432b806ed5af7648fbfa6684ab2b53c0ae4f960d4a8e8d795f23019e89e,PodSandboxId:1e2dd7d6e5d142787383541043cab58c84b0dc1a8897cb2d658effd5a310daeb,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722454272598618806,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8smgp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b1a27a4-6c31-416a-b17c-fd63272f66a9,},Anno
tations:map[string]string{io.kubernetes.container.hash: da9f8ed1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0a0d9ea24c8f7b037f93184712db3555c0011ab694079725612736e2d36b92,PodSandboxId:e0a9936c2f86927c9632664b1ba9e7e8db3bfc1c5aac9d3f2ee032723998ec42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722454271180200289,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-95v5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 33319
19d-7251-49f9-b21d-55078786d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: de725f98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acc20d13d3bc1b75ebc726fafe9ff4ae146ce6bd01305036d3078a076c9e48d,PodSandboxId:ad5d705d2b17b756b1e64c67ff1ce241c932d5b1beba35de0e7359652e38ef4a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722454256728982665,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-s4tts,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 16f96003-84b9-4f23-a5c6-b1f5047bf0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 501ef6d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542,PodSandboxId:b12d2a576bca9bb9bede1e19922be7fd3e2a99bfafbe9c8141823699a227e26f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722454207862452115,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3,},Annotations:map[string]string{io.kubernetes.container.hash: 131ec37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c,PodSandboxId:c24ffde6a55f26d9d7699b205ed25be9b7beeae5ba21d8479993cd545de0743d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454203005072752,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name
: coredns-7db6d8ff4d-fzb4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43b53489-b06e-4cb4-9515-be6b4e7f5588,},Annotations:map[string]string{io.kubernetes.container.hash: 6f89aeb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a,PodSandboxId:e7a00f7c882a2a0c962b046e4dc63d3696cd64086ed16900736a407ffbac2c40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454201493789328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfzvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f30c198-5a23-42cb-8a8a-3e81ac3dce14,},Annotations:map[string]string{io.kubernetes.container.hash: b6458cb1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8,PodSandboxId:9867fdd58f216124e97b07e78d2cf248e529890ddd0b0fbdeeb09128aba4d04f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454179684668090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8dd9fb67173c0838ca349b97994d63,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814,PodSandboxId:03c861eb8f9baa0b351139e9b61cc6c3bc50ecaa86cbafdf6e69cf27d10cbea7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722454179690805575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcf05e09ab846407ce6f5cc016c5936,},Annotations:map[string]string{io.kubernetes.container.hash: e1633df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8,PodSandboxId:7a615a0816098b4b57ae43b5a6c84653e02217b4a93a8a50b26eb461c3da170f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,Cr
eatedAt:1722454179635198579,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91dc58de568c063e3805468402f4b65e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5,PodSandboxId:4c9318500bde794a83060cb785866f7c0f0a8ab1b3cdc22ce7a8777fba61cf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,C
reatedAt:1722454179588690580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d24d4034029e15cb6159863f99c4af6,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98b3d201-3980-4a8e-b8e6-3489c719f78c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.920571081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=082b33df-5579-4aea-b1e0-43ac0864ded1 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.920649158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=082b33df-5579-4aea-b1e0-43ac0864ded1 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.921760185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5876ee8b-4fdd-418a-85af-74b5f94c099f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.923019711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454581922992221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5876ee8b-4fdd-418a-85af-74b5f94c099f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.923788046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1820f57-8690-4c4e-a446-58eebbe3bee7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.923978086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1820f57-8690-4c4e-a446-58eebbe3bee7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.924451228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ed79966f6af9ecd4f1d92cff93ca01f67e231df8339417ed4a70a5bd37dc77a,PodSandboxId:4f059fa81b036762f55e37fc60476ed262fac5d8033b7bfdc7823c45ee08088e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722454574884511387,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-hw5cv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159,},Annotations:map[string]string{io.kubernetes.container.hash: e455fdcd,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eb5fac72ab9789e1ae64e1914c012999675d86c75bb25fc04024108f72f2af,PodSandboxId:90af3eb44c3b9b87f9e5cef21c5551cf0a9786b89ac1139ce572eba28d734387,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722454434696019220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8401d1a8-6dd2-40c9-8e23-deb823f5b208,},Annotations:map[string]string{io.kubernet
es.container.hash: 62d342f3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c174df487e549a44e4bcf555e30263a99c3c51908705a0c5f10e072b5549c6d8,PodSandboxId:ac6cc8b053bdb3bdb6b1af470a8f609ad7b6a80bae9836c268ad21042104db44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722454357926521523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a390ff63-8c7c-40de-a
874-20112644ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f8bde95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b13432b806ed5af7648fbfa6684ab2b53c0ae4f960d4a8e8d795f23019e89e,PodSandboxId:1e2dd7d6e5d142787383541043cab58c84b0dc1a8897cb2d658effd5a310daeb,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722454272598618806,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8smgp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b1a27a4-6c31-416a-b17c-fd63272f66a9,},Anno
tations:map[string]string{io.kubernetes.container.hash: da9f8ed1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0a0d9ea24c8f7b037f93184712db3555c0011ab694079725612736e2d36b92,PodSandboxId:e0a9936c2f86927c9632664b1ba9e7e8db3bfc1c5aac9d3f2ee032723998ec42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722454271180200289,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-95v5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 33319
19d-7251-49f9-b21d-55078786d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: de725f98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acc20d13d3bc1b75ebc726fafe9ff4ae146ce6bd01305036d3078a076c9e48d,PodSandboxId:ad5d705d2b17b756b1e64c67ff1ce241c932d5b1beba35de0e7359652e38ef4a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722454256728982665,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-s4tts,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 16f96003-84b9-4f23-a5c6-b1f5047bf0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 501ef6d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542,PodSandboxId:b12d2a576bca9bb9bede1e19922be7fd3e2a99bfafbe9c8141823699a227e26f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722454207862452115,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3,},Annotations:map[string]string{io.kubernetes.container.hash: 131ec37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c,PodSandboxId:c24ffde6a55f26d9d7699b205ed25be9b7beeae5ba21d8479993cd545de0743d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454203005072752,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name
: coredns-7db6d8ff4d-fzb4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43b53489-b06e-4cb4-9515-be6b4e7f5588,},Annotations:map[string]string{io.kubernetes.container.hash: 6f89aeb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a,PodSandboxId:e7a00f7c882a2a0c962b046e4dc63d3696cd64086ed16900736a407ffbac2c40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454201493789328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfzvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f30c198-5a23-42cb-8a8a-3e81ac3dce14,},Annotations:map[string]string{io.kubernetes.container.hash: b6458cb1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8,PodSandboxId:9867fdd58f216124e97b07e78d2cf248e529890ddd0b0fbdeeb09128aba4d04f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454179684668090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8dd9fb67173c0838ca349b97994d63,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814,PodSandboxId:03c861eb8f9baa0b351139e9b61cc6c3bc50ecaa86cbafdf6e69cf27d10cbea7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722454179690805575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcf05e09ab846407ce6f5cc016c5936,},Annotations:map[string]string{io.kubernetes.container.hash: e1633df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8,PodSandboxId:7a615a0816098b4b57ae43b5a6c84653e02217b4a93a8a50b26eb461c3da170f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,Cr
eatedAt:1722454179635198579,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91dc58de568c063e3805468402f4b65e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5,PodSandboxId:4c9318500bde794a83060cb785866f7c0f0a8ab1b3cdc22ce7a8777fba61cf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,C
reatedAt:1722454179588690580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d24d4034029e15cb6159863f99c4af6,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1820f57-8690-4c4e-a446-58eebbe3bee7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.956506185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d051775e-f2a2-416c-b111-81ee0cbbef06 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.956597985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d051775e-f2a2-416c-b111-81ee0cbbef06 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.957935541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fdd67ba3-795d-4c87-a18c-505c853535ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.959104247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454581959081799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdd67ba3-795d-4c87-a18c-505c853535ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.959781995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f497e51-05a1-4a90-b1f9-cd3f90b7086e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.959853614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f497e51-05a1-4a90-b1f9-cd3f90b7086e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:36:21 addons-715925 crio[681]: time="2024-07-31 19:36:21.960164849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ed79966f6af9ecd4f1d92cff93ca01f67e231df8339417ed4a70a5bd37dc77a,PodSandboxId:4f059fa81b036762f55e37fc60476ed262fac5d8033b7bfdc7823c45ee08088e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722454574884511387,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-hw5cv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159,},Annotations:map[string]string{io.kubernetes.container.hash: e455fdcd,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eb5fac72ab9789e1ae64e1914c012999675d86c75bb25fc04024108f72f2af,PodSandboxId:90af3eb44c3b9b87f9e5cef21c5551cf0a9786b89ac1139ce572eba28d734387,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722454434696019220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8401d1a8-6dd2-40c9-8e23-deb823f5b208,},Annotations:map[string]string{io.kubernet
es.container.hash: 62d342f3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c174df487e549a44e4bcf555e30263a99c3c51908705a0c5f10e072b5549c6d8,PodSandboxId:ac6cc8b053bdb3bdb6b1af470a8f609ad7b6a80bae9836c268ad21042104db44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722454357926521523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a390ff63-8c7c-40de-a
874-20112644ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f8bde95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b13432b806ed5af7648fbfa6684ab2b53c0ae4f960d4a8e8d795f23019e89e,PodSandboxId:1e2dd7d6e5d142787383541043cab58c84b0dc1a8897cb2d658effd5a310daeb,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722454272598618806,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8smgp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8b1a27a4-6c31-416a-b17c-fd63272f66a9,},Anno
tations:map[string]string{io.kubernetes.container.hash: da9f8ed1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0a0d9ea24c8f7b037f93184712db3555c0011ab694079725612736e2d36b92,PodSandboxId:e0a9936c2f86927c9632664b1ba9e7e8db3bfc1c5aac9d3f2ee032723998ec42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722454271180200289,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-95v5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 33319
19d-7251-49f9-b21d-55078786d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: de725f98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acc20d13d3bc1b75ebc726fafe9ff4ae146ce6bd01305036d3078a076c9e48d,PodSandboxId:ad5d705d2b17b756b1e64c67ff1ce241c932d5b1beba35de0e7359652e38ef4a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722454256728982665,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-s4tts,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 16f96003-84b9-4f23-a5c6-b1f5047bf0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 501ef6d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542,PodSandboxId:b12d2a576bca9bb9bede1e19922be7fd3e2a99bfafbe9c8141823699a227e26f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722454207862452115,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3,},Annotations:map[string]string{io.kubernetes.container.hash: 131ec37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c,PodSandboxId:c24ffde6a55f26d9d7699b205ed25be9b7beeae5ba21d8479993cd545de0743d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454203005072752,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name
: coredns-7db6d8ff4d-fzb4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43b53489-b06e-4cb4-9515-be6b4e7f5588,},Annotations:map[string]string{io.kubernetes.container.hash: 6f89aeb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a,PodSandboxId:e7a00f7c882a2a0c962b046e4dc63d3696cd64086ed16900736a407ffbac2c40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454201493789328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfzvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f30c198-5a23-42cb-8a8a-3e81ac3dce14,},Annotations:map[string]string{io.kubernetes.container.hash: b6458cb1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8,PodSandboxId:9867fdd58f216124e97b07e78d2cf248e529890ddd0b0fbdeeb09128aba4d04f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454179684668090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8dd9fb67173c0838ca349b97994d63,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814,PodSandboxId:03c861eb8f9baa0b351139e9b61cc6c3bc50ecaa86cbafdf6e69cf27d10cbea7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722454179690805575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcf05e09ab846407ce6f5cc016c5936,},Annotations:map[string]string{io.kubernetes.container.hash: e1633df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8,PodSandboxId:7a615a0816098b4b57ae43b5a6c84653e02217b4a93a8a50b26eb461c3da170f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,Cr
eatedAt:1722454179635198579,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91dc58de568c063e3805468402f4b65e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5,PodSandboxId:4c9318500bde794a83060cb785866f7c0f0a8ab1b3cdc22ce7a8777fba61cf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,C
reatedAt:1722454179588690580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d24d4034029e15cb6159863f99c4af6,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f497e51-05a1-4a90-b1f9-cd3f90b7086e name=/runtime.v1.RuntimeService/ListContainers
	
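Each Request/Response pair in the CRI-O debug log above is a single CRI call from the kubelet (Version, ImageFsInfo, ListContainers). As a rough sketch, the same endpoints can be exercised by hand over the CRI-O socket with crictl, assuming SSH access to the test VM in the same way the test itself uses it:

    out/minikube-linux-amd64 -p addons-715925 ssh "sudo crictl version"
    out/minikube-linux-amd64 -p addons-715925 ssh "sudo crictl imagefsinfo"
    out/minikube-linux-amd64 -p addons-715925 ssh "sudo crictl ps -a"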
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ed79966f6af9       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   4f059fa81b036       hello-world-app-6778b5fc9f-hw5cv
	a4eb5fac72ab9       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   90af3eb44c3b9       nginx
	c174df487e549       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   ac6cc8b053bdb       busybox
	d7b13432b806e       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             5 minutes ago       Exited              patch                     1                   1e2dd7d6e5d14       ingress-nginx-admission-patch-8smgp
	4c0a0d9ea24c8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   5 minutes ago       Exited              create                    0                   e0a9936c2f869       ingress-nginx-admission-create-95v5x
	0acc20d13d3bc       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        5 minutes ago       Running             metrics-server            0                   ad5d705d2b17b       metrics-server-c59844bb4-s4tts
	09c88c0d9b413       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             6 minutes ago       Running             storage-provisioner       0                   b12d2a576bca9       storage-provisioner
	3b5fe3d46d671       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             6 minutes ago       Running             coredns                   0                   c24ffde6a55f2       coredns-7db6d8ff4d-fzb4m
	13d77da1e019e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             6 minutes ago       Running             kube-proxy                0                   e7a00f7c882a2       kube-proxy-tfzvz
	c81b526e401e8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             6 minutes ago       Running             etcd                      0                   03c861eb8f9ba       etcd-addons-715925
	779ef57c86f86       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             6 minutes ago       Running             kube-scheduler            0                   9867fdd58f216       kube-scheduler-addons-715925
	ecf8437cb53cc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             6 minutes ago       Running             kube-controller-manager   0                   7a615a0816098       kube-controller-manager-addons-715925
	b7f5e7ab4069c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             6 minutes ago       Running             kube-apiserver            0                   4c9318500bde7       kube-apiserver-addons-715925
	
	
	==> coredns [3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c] <==
	[INFO] 10.244.0.7:50331 - 9430 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163087s
	[INFO] 10.244.0.7:40064 - 62706 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006072s
	[INFO] 10.244.0.7:40064 - 59340 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085788s
	[INFO] 10.244.0.7:36130 - 43402 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003539s
	[INFO] 10.244.0.7:36130 - 14964 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050617s
	[INFO] 10.244.0.7:35793 - 44276 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054444s
	[INFO] 10.244.0.7:35793 - 28919 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126933s
	[INFO] 10.244.0.7:41597 - 54420 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067684s
	[INFO] 10.244.0.7:41597 - 15249 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168983s
	[INFO] 10.244.0.7:51563 - 54876 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108786s
	[INFO] 10.244.0.7:51563 - 50266 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131052s
	[INFO] 10.244.0.7:34565 - 35957 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036978s
	[INFO] 10.244.0.7:34565 - 44663 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042174s
	[INFO] 10.244.0.7:42123 - 49510 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005782s
	[INFO] 10.244.0.7:42123 - 19303 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000028853s
	[INFO] 10.244.0.22:42275 - 3819 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166041s
	[INFO] 10.244.0.22:46321 - 27025 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000807326s
	[INFO] 10.244.0.22:38141 - 32280 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123722s
	[INFO] 10.244.0.22:55685 - 39294 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000274728s
	[INFO] 10.244.0.22:49384 - 11311 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116954s
	[INFO] 10.244.0.22:59354 - 10926 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00026542s
	[INFO] 10.244.0.22:49641 - 21222 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000694673s
	[INFO] 10.244.0.22:37007 - 25487 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000712432s
	[INFO] 10.244.0.27:54379 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000364485s
	[INFO] 10.244.0.27:55795 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098202s
	
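The repeated NXDOMAIN/NOERROR pairs above are the normal ndots search-path expansion: CoreDNS is queried for each of the pod's search domains before the fully qualified name resolves with NOERROR. One such lookup can be reproduced from inside the cluster, assuming the busybox pod from this run is still present:

    kubectl --context addons-715925 exec busybox -- nslookup registry.kube-system.svc.cluster.local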
	
	==> describe nodes <==
	Name:               addons-715925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-715925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=addons-715925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_29_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-715925
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:29:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-715925
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:36:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:34:21 +0000   Wed, 31 Jul 2024 19:29:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:34:21 +0000   Wed, 31 Jul 2024 19:29:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:34:21 +0000   Wed, 31 Jul 2024 19:29:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:34:21 +0000   Wed, 31 Jul 2024 19:29:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    addons-715925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c12009eb379d4987aaee89629ea0d81e
	  System UUID:                c12009eb-379d-4987-aaee-89629ea0d81e
	  Boot ID:                    db862b72-c89b-4454-bb24-c704de455a63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  default                     hello-world-app-6778b5fc9f-hw5cv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-7db6d8ff4d-fzb4m                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m24s
	  kube-system                 etcd-addons-715925                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m37s
	  kube-system                 kube-apiserver-addons-715925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-addons-715925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-tfzvz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-addons-715925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 metrics-server-c59844bb4-s4tts           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m18s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m17s                  kube-proxy       
	  Normal  Starting                 6m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m43s (x8 over 6m44s)  kubelet          Node addons-715925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x8 over 6m44s)  kubelet          Node addons-715925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x7 over 6m44s)  kubelet          Node addons-715925 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m37s                  kubelet          Node addons-715925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s                  kubelet          Node addons-715925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s                  kubelet          Node addons-715925 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m36s                  kubelet          Node addons-715925 status is now: NodeReady
	  Normal  RegisteredNode           6m25s                  node-controller  Node addons-715925 event: Registered Node addons-715925 in Controller
	
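The node summary above can be regenerated at any point while the cluster is up, assuming the addons-715925 context is still available:

    kubectl --context addons-715925 describe node addons-715925
    kubectl --context addons-715925 get node addons-715925 -o wide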
	
	==> dmesg <==
	[  +5.365336] kauditd_printk_skb: 126 callbacks suppressed
	[  +6.532665] kauditd_printk_skb: 99 callbacks suppressed
	[ +37.616437] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.678739] kauditd_printk_skb: 30 callbacks suppressed
	[Jul31 19:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.212974] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.027127] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.583940] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 19:32] kauditd_printk_skb: 24 callbacks suppressed
	[ +16.959149] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.150327] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.201404] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.421112] kauditd_printk_skb: 4 callbacks suppressed
	[ +16.123969] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.101261] kauditd_printk_skb: 47 callbacks suppressed
	[Jul31 19:33] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.114737] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.415201] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.450189] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.430019] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.441184] kauditd_printk_skb: 65 callbacks suppressed
	[  +5.852375] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.556726] kauditd_printk_skb: 6 callbacks suppressed
	[Jul31 19:36] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.128661] kauditd_printk_skb: 21 callbacks suppressed
	
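The kauditd_printk_skb lines only indicate that kernel audit messages were rate-limited during container churn; the full ring buffer can be re-dumped from the VM if needed, using the same SSH pattern as above:

    out/minikube-linux-amd64 -p addons-715925 ssh "dmesg | tail -n 60"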
	
	==> etcd [c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814] <==
	{"level":"warn","ts":"2024-07-31T19:31:14.578663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.51853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.147\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-31T19:31:14.578678Z","caller":"traceutil/trace.go:171","msg":"trace[593203547] range","detail":"{range_begin:/registry/masterleases/192.168.39.147; range_end:; response_count:1; response_revision:1055; }","duration":"225.557969ms","start":"2024-07-31T19:31:14.353115Z","end":"2024-07-31T19:31:14.578673Z","steps":["trace[593203547] 'agreement among raft nodes before linearized reading'  (duration: 225.499513ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:31:25.245605Z","caller":"traceutil/trace.go:171","msg":"trace[640099674] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"382.600336ms","start":"2024-07-31T19:31:24.862924Z","end":"2024-07-31T19:31:25.245524Z","steps":["trace[640099674] 'process raft request'  (duration: 382.275219ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:31:25.246409Z","caller":"traceutil/trace.go:171","msg":"trace[1689728] linearizableReadLoop","detail":"{readStateIndex:1196; appliedIndex:1196; }","duration":"204.261325ms","start":"2024-07-31T19:31:25.041724Z","end":"2024-07-31T19:31:25.245986Z","steps":["trace[1689728] 'read index received'  (duration: 204.257116ms)","trace[1689728] 'applied index is now lower than readState.Index'  (duration: 3.634µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T19:31:25.246604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.811463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T19:31:25.246912Z","caller":"traceutil/trace.go:171","msg":"trace[1187332687] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1157; }","duration":"205.143161ms","start":"2024-07-31T19:31:25.041702Z","end":"2024-07-31T19:31:25.246845Z","steps":["trace[1187332687] 'agreement among raft nodes before linearized reading'  (duration: 204.733646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:31:25.247484Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:31:24.86286Z","time spent":"383.846221ms","remote":"127.0.0.1:51002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1117 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-07-31T19:31:25.250962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.31896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85649"}
	{"level":"info","ts":"2024-07-31T19:31:25.25107Z","caller":"traceutil/trace.go:171","msg":"trace[1549059227] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1158; }","duration":"159.452798ms","start":"2024-07-31T19:31:25.091609Z","end":"2024-07-31T19:31:25.251061Z","steps":["trace[1549059227] 'agreement among raft nodes before linearized reading'  (duration: 159.171833ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:32:29.04634Z","caller":"traceutil/trace.go:171","msg":"trace[1974567133] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"279.102742ms","start":"2024-07-31T19:32:28.767211Z","end":"2024-07-31T19:32:29.046314Z","steps":["trace[1974567133] 'read index received'  (duration: 278.98642ms)","trace[1974567133] 'applied index is now lower than readState.Index'  (duration: 115.784µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T19:32:29.046554Z","caller":"traceutil/trace.go:171","msg":"trace[2035373745] transaction","detail":"{read_only:false; response_revision:1286; number_of_response:1; }","duration":"464.125858ms","start":"2024-07-31T19:32:28.582412Z","end":"2024-07-31T19:32:29.046538Z","steps":["trace[2035373745] 'process raft request'  (duration: 463.80075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:32:29.046709Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:32:28.582398Z","time spent":"464.207879ms","remote":"127.0.0.1:50934","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1282 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-31T19:32:29.046823Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.599749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-31T19:32:29.046952Z","caller":"traceutil/trace.go:171","msg":"trace[1826426069] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1286; }","duration":"279.732516ms","start":"2024-07-31T19:32:28.767207Z","end":"2024-07-31T19:32:29.046939Z","steps":["trace[1826426069] 'agreement among raft nodes before linearized reading'  (duration: 279.365528ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:32:29.047022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.699097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-31T19:32:29.047063Z","caller":"traceutil/trace.go:171","msg":"trace[1889849188] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1286; }","duration":"243.760946ms","start":"2024-07-31T19:32:28.803295Z","end":"2024-07-31T19:32:29.047056Z","steps":["trace[1889849188] 'agreement among raft nodes before linearized reading'  (duration: 243.624503ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:33:38.689559Z","caller":"traceutil/trace.go:171","msg":"trace[164354990] linearizableReadLoop","detail":"{readStateIndex:1931; appliedIndex:1930; }","duration":"398.653486ms","start":"2024-07-31T19:33:38.290861Z","end":"2024-07-31T19:33:38.689515Z","steps":["trace[164354990] 'read index received'  (duration: 398.433375ms)","trace[164354990] 'applied index is now lower than readState.Index'  (duration: 219.674µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T19:33:38.689819Z","caller":"traceutil/trace.go:171","msg":"trace[629265884] transaction","detail":"{read_only:false; response_revision:1852; number_of_response:1; }","duration":"412.02375ms","start":"2024-07-31T19:33:38.277776Z","end":"2024-07-31T19:33:38.6898Z","steps":["trace[629265884] 'process raft request'  (duration: 411.550986ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:33:38.690074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"399.18898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3609"}
	{"level":"warn","ts":"2024-07-31T19:33:38.690111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:33:38.277755Z","time spent":"412.203683ms","remote":"127.0.0.1:51002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1766 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-07-31T19:33:38.690244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.109161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/csi-hostpath-snapclass\" ","response":"range_response_count:1 size:1176"}
	{"level":"info","ts":"2024-07-31T19:33:38.690299Z","caller":"traceutil/trace.go:171","msg":"trace[525055464] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/csi-hostpath-snapclass; range_end:; response_count:1; response_revision:1852; }","duration":"267.183598ms","start":"2024-07-31T19:33:38.423105Z","end":"2024-07-31T19:33:38.690289Z","steps":["trace[525055464] 'agreement among raft nodes before linearized reading'  (duration: 267.087732ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:33:38.690133Z","caller":"traceutil/trace.go:171","msg":"trace[502315660] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1852; }","duration":"399.28118ms","start":"2024-07-31T19:33:38.290837Z","end":"2024-07-31T19:33:38.690118Z","steps":["trace[502315660] 'agreement among raft nodes before linearized reading'  (duration: 399.098813ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:33:38.69115Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:33:38.290825Z","time spent":"400.31255ms","remote":"127.0.0.1:50950","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3632,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"info","ts":"2024-07-31T19:34:29.134543Z","caller":"traceutil/trace.go:171","msg":"trace[586832778] transaction","detail":"{read_only:false; response_revision:2026; number_of_response:1; }","duration":"115.065421ms","start":"2024-07-31T19:34:29.019447Z","end":"2024-07-31T19:34:29.134513Z","steps":["trace[586832778] 'process raft request'  (duration: 114.776167ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:36:22 up 7 min,  0 users,  load average: 0.48, 0.94, 0.57
	Linux addons-715925 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5] <==
	I0731 19:32:05.319421       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0731 19:32:44.722072       1 conn.go:339] Error on socket receive: read tcp 192.168.39.147:8443->192.168.39.1:55032: use of closed network connection
	E0731 19:32:44.940270       1 conn.go:339] Error on socket receive: read tcp 192.168.39.147:8443->192.168.39.1:55056: use of closed network connection
	I0731 19:33:13.870457       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0731 19:33:28.344401       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0731 19:33:28.560705       1 watch.go:250] http2: stream closed
	I0731 19:33:30.222315       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.73.217"}
	I0731 19:33:38.725385       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.725480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 19:33:38.760821       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.760914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 19:33:38.766490       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.766558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 19:33:38.809024       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.809056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 19:33:38.854787       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.855085       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 19:33:39.766945       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 19:33:39.855457       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 19:33:39.888209       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 19:33:44.644176       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 19:33:45.683583       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 19:33:50.125092       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 19:33:50.302755       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.191.185"}
	I0731 19:36:11.951771       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.241.161"}
	
	
	==> kube-controller-manager [ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8] <==
	E0731 19:34:48.820841       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:34:51.444092       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:34:51.444183       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:35:22.110571       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:35:22.110737       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:35:27.862584       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:35:27.862732       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:35:29.073538       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:35:29.073593       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:35:46.201162       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:35:46.201217       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 19:36:11.819547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="39.887578ms"
	I0731 19:36:11.836511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="16.867469ms"
	I0731 19:36:11.836591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="36.773µs"
	W0731 19:36:12.330602       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:36:12.330661       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 19:36:14.021140       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0731 19:36:14.025588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="8.366µs"
	I0731 19:36:14.029914       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0731 19:36:15.430071       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.726558ms"
	I0731 19:36:15.430578       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="93.716µs"
	W0731 19:36:16.368490       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:36:16.368548       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:36:22.216116       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:36:22.216152       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a] <==
	I0731 19:30:02.839547       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:30:02.909990       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0731 19:30:04.573319       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:30:04.573364       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:30:04.573380       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:30:04.607464       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:30:04.607737       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:30:04.607753       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:30:04.627535       1 config.go:192] "Starting service config controller"
	I0731 19:30:04.627555       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:30:04.627585       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:30:04.627589       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:30:04.630647       1 config.go:319] "Starting node config controller"
	I0731 19:30:04.630657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:30:04.730014       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:30:04.730069       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:30:04.756100       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8] <==
	W0731 19:29:42.666249       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 19:29:42.666284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 19:29:42.666301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 19:29:42.666307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 19:29:42.666353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:29:42.666381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:29:42.666450       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:29:42.666461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:29:43.497060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:29:43.497168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:29:43.591383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:29:43.591751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:29:43.667489       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:29:43.667624       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:29:43.770363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 19:29:43.770530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 19:29:43.847949       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 19:29:43.848519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 19:29:43.872792       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:29:43.874292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 19:29:43.901687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:29:43.901791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:29:43.962045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 19:29:43.962140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 19:29:45.536248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 19:36:11 addons-715925 kubelet[1266]: I0731 19:36:11.816242    1266 memory_manager.go:354] "RemoveStaleState removing state" podUID="db698a8e-e32b-4ee0-93a6-82cc059e7064" containerName="gadget"
	Jul 31 19:36:11 addons-715925 kubelet[1266]: I0731 19:36:11.816285    1266 memory_manager.go:354] "RemoveStaleState removing state" podUID="db698a8e-e32b-4ee0-93a6-82cc059e7064" containerName="gadget"
	Jul 31 19:36:11 addons-715925 kubelet[1266]: I0731 19:36:11.935077    1266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjggb\" (UniqueName: \"kubernetes.io/projected/17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159-kube-api-access-sjggb\") pod \"hello-world-app-6778b5fc9f-hw5cv\" (UID: \"17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159\") " pod="default/hello-world-app-6778b5fc9f-hw5cv"
	Jul 31 19:36:13 addons-715925 kubelet[1266]: I0731 19:36:13.042407    1266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fntr9\" (UniqueName: \"kubernetes.io/projected/bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397-kube-api-access-fntr9\") pod \"bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397\" (UID: \"bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397\") "
	Jul 31 19:36:13 addons-715925 kubelet[1266]: I0731 19:36:13.044657    1266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397-kube-api-access-fntr9" (OuterVolumeSpecName: "kube-api-access-fntr9") pod "bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397" (UID: "bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397"). InnerVolumeSpecName "kube-api-access-fntr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 19:36:13 addons-715925 kubelet[1266]: I0731 19:36:13.142637    1266 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fntr9\" (UniqueName: \"kubernetes.io/projected/bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397-kube-api-access-fntr9\") on node \"addons-715925\" DevicePath \"\""
	Jul 31 19:36:13 addons-715925 kubelet[1266]: I0731 19:36:13.392259    1266 scope.go:117] "RemoveContainer" containerID="d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071"
	Jul 31 19:36:13 addons-715925 kubelet[1266]: I0731 19:36:13.414079    1266 scope.go:117] "RemoveContainer" containerID="d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071"
	Jul 31 19:36:13 addons-715925 kubelet[1266]: E0731 19:36:13.414998    1266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071\": container with ID starting with d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071 not found: ID does not exist" containerID="d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071"
	Jul 31 19:36:13 addons-715925 kubelet[1266]: I0731 19:36:13.415043    1266 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071"} err="failed to get container status \"d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071\": rpc error: code = NotFound desc = could not find container \"d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071\": container with ID starting with d727990df4a5a9db35c6deed93d80f1bc7be53402ce66e9e133dc56ba3245071 not found: ID does not exist"
	Jul 31 19:36:15 addons-715925 kubelet[1266]: I0731 19:36:15.178380    1266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3331919d-7251-49f9-b21d-55078786d8d9" path="/var/lib/kubelet/pods/3331919d-7251-49f9-b21d-55078786d8d9/volumes"
	Jul 31 19:36:15 addons-715925 kubelet[1266]: I0731 19:36:15.178854    1266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1a27a4-6c31-416a-b17c-fd63272f66a9" path="/var/lib/kubelet/pods/8b1a27a4-6c31-416a-b17c-fd63272f66a9/volumes"
	Jul 31 19:36:15 addons-715925 kubelet[1266]: I0731 19:36:15.180602    1266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397" path="/var/lib/kubelet/pods/bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397/volumes"
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.277235    1266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tfcx\" (UniqueName: \"kubernetes.io/projected/69a72872-af54-439f-b266-7430ac8d546c-kube-api-access-9tfcx\") pod \"69a72872-af54-439f-b266-7430ac8d546c\" (UID: \"69a72872-af54-439f-b266-7430ac8d546c\") "
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.277285    1266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/69a72872-af54-439f-b266-7430ac8d546c-webhook-cert\") pod \"69a72872-af54-439f-b266-7430ac8d546c\" (UID: \"69a72872-af54-439f-b266-7430ac8d546c\") "
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.279309    1266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a72872-af54-439f-b266-7430ac8d546c-kube-api-access-9tfcx" (OuterVolumeSpecName: "kube-api-access-9tfcx") pod "69a72872-af54-439f-b266-7430ac8d546c" (UID: "69a72872-af54-439f-b266-7430ac8d546c"). InnerVolumeSpecName "kube-api-access-9tfcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.282364    1266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a72872-af54-439f-b266-7430ac8d546c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "69a72872-af54-439f-b266-7430ac8d546c" (UID: "69a72872-af54-439f-b266-7430ac8d546c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.377572    1266 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/69a72872-af54-439f-b266-7430ac8d546c-webhook-cert\") on node \"addons-715925\" DevicePath \"\""
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.377608    1266 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9tfcx\" (UniqueName: \"kubernetes.io/projected/69a72872-af54-439f-b266-7430ac8d546c-kube-api-access-9tfcx\") on node \"addons-715925\" DevicePath \"\""
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.421488    1266 scope.go:117] "RemoveContainer" containerID="300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33"
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.441415    1266 scope.go:117] "RemoveContainer" containerID="300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33"
	Jul 31 19:36:17 addons-715925 kubelet[1266]: E0731 19:36:17.442046    1266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33\": container with ID starting with 300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33 not found: ID does not exist" containerID="300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33"
	Jul 31 19:36:17 addons-715925 kubelet[1266]: I0731 19:36:17.442117    1266 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33"} err="failed to get container status \"300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33\": rpc error: code = NotFound desc = could not find container \"300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33\": container with ID starting with 300c19d220e32f101cc67ee0e5dae66f91be689ad88a8e67a9c672e3badc4e33 not found: ID does not exist"
	Jul 31 19:36:19 addons-715925 kubelet[1266]: I0731 19:36:19.171251    1266 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 19:36:19 addons-715925 kubelet[1266]: I0731 19:36:19.175273    1266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a72872-af54-439f-b266-7430ac8d546c" path="/var/lib/kubelet/pods/69a72872-af54-439f-b266-7430ac8d546c/volumes"
	
	
	==> storage-provisioner [09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542] <==
	I0731 19:30:08.671806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 19:30:08.694463       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 19:30:08.695923       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 19:30:08.779586       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 19:30:08.779741       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-715925_02836a47-2513-4a36-9ad5-e52438ae791c!
	I0731 19:30:08.780688       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"adf40b22-8d5f-44f6-92d5-499e9a40e228", APIVersion:"v1", ResourceVersion:"792", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-715925_02836a47-2513-4a36-9ad5-e52438ae791c became leader
	I0731 19:30:08.880823       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-715925_02836a47-2513-4a36-9ad5-e52438ae791c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-715925 -n addons-715925
helpers_test.go:261: (dbg) Run:  kubectl --context addons-715925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.18s)
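A quick way to re-check this failure by hand, outside the harness, is to re-issue the same Host-header request against the cluster's ingress address. The Go sketch below is illustrative only (it is not part of the minikube test suite); it assumes the ingress is reachable at the IP passed as the first argument, e.g. the address reported by "minikube -p addons-715925 ip".

// probe.go: manually repeat the Host-header check that timed out in this test.
// Usage: go run probe.go <ingress-ip>
// The hostname matches the one the test's curl uses (see the Audit table above);
// everything else here is a hypothetical helper, not minikube test code.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: probe <ingress-ip>")
		os.Exit(2)
	}
	req, err := http.NewRequest(http.MethodGet, "http://"+os.Args[1]+"/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // same Host header the test's curl sends

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status: %s, body bytes: %d\n", resp.Status, len(body))
}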

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (358.48s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.723897ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-s4tts" [16f96003-84b9-4f23-a5c6-b1f5047bf0f7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005023234s
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (67.846283ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 3m23.177577593s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (66.431665ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 3m26.164384429s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (75.724933ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 3m32.020039859s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (78.18797ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 3m41.177677532s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (66.692362ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 3m53.029056373s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (60.643174ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 4m11.192327253s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (63.507307ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 4m45.006321117s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (63.113799ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 5m31.787275504s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (61.824738ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 6m45.016988354s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (67.315199ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 8m6.943602493s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-715925 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-715925 top pods -n kube-system: exit status 1 (64.212668ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-fzb4m, age: 9m12.924933075s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
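The repeated "kubectl top pods -n kube-system" runs above are the harness polling until metrics-server starts serving pod metrics, which never happened in this run. A standalone sketch of the same poll-until-available loop is shown below; it assumes kubectl is on PATH and reuses the context name from this report, while the 15-second interval and 6-minute deadline are illustrative values, not the harness's actual timings.

// pollmetrics.go: keep running `kubectl top pods -n kube-system` until the
// metrics API starts reporting pod metrics, or give up after a deadline.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--context", "addons-715925",
			"top", "pods", "-n", "kube-system")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return
		}
		// kubectl exits non-zero while metrics are not yet available for the pods.
		fmt.Printf("not ready: %v: %s", err, out)
		time.Sleep(15 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "gave up: pod metrics never became available")
	os.Exit(1)
}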
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-715925 -n addons-715925
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 logs -n 25: (1.265021059s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-373672                                                                     | download-only-373672 | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-281803 | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC |                     |
	|         | binary-mirror-281803                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37353                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-281803                                                                     | binary-mirror-281803 | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC |                     |
	|         | addons-715925                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC |                     |
	|         | addons-715925                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-715925 --wait=true                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:32 UTC | 31 Jul 24 19:32 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:32 UTC | 31 Jul 24 19:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | -p addons-715925                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-715925 ssh cat                                                                       | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | /opt/local-path-provisioner/pvc-7abc566a-0469-49d9-9aef-8963a9d00867_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-715925 ip                                                                            | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | addons-715925                                                                               |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | -p addons-715925                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-715925 addons                                                                        | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-715925 addons                                                                        | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | addons-715925                                                                               |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:33 UTC | 31 Jul 24 19:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-715925 ssh curl -s                                                                   | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-715925 ip                                                                            | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-715925 addons disable                                                                | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-715925 addons                                                                        | addons-715925        | jenkins | v1.33.1 | 31 Jul 24 19:39 UTC | 31 Jul 24 19:39 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:29:04
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:29:04.417251  130103 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:29:04.417370  130103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:29:04.417382  130103 out.go:304] Setting ErrFile to fd 2...
	I0731 19:29:04.417389  130103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:29:04.417595  130103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:29:04.418243  130103 out.go:298] Setting JSON to false
	I0731 19:29:04.419592  130103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4280,"bootTime":1722449864,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:29:04.419661  130103 start.go:139] virtualization: kvm guest
	I0731 19:29:04.421751  130103 out.go:177] * [addons-715925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:29:04.423095  130103 notify.go:220] Checking for updates...
	I0731 19:29:04.423108  130103 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:29:04.424472  130103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:29:04.425899  130103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:29:04.427241  130103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:29:04.428556  130103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:29:04.429886  130103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:29:04.431272  130103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:29:04.463327  130103 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 19:29:04.464717  130103 start.go:297] selected driver: kvm2
	I0731 19:29:04.464732  130103 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:29:04.464744  130103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:29:04.465508  130103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:29:04.465582  130103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:29:04.480999  130103 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:29:04.481056  130103 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:29:04.481303  130103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:29:04.481395  130103 cni.go:84] Creating CNI manager for ""
	I0731 19:29:04.481414  130103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:29:04.481424  130103 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:29:04.481503  130103 start.go:340] cluster config:
	{Name:addons-715925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:29:04.481623  130103 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:29:04.483599  130103 out.go:177] * Starting "addons-715925" primary control-plane node in "addons-715925" cluster
	I0731 19:29:04.484961  130103 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:29:04.485000  130103 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:29:04.485013  130103 cache.go:56] Caching tarball of preloaded images
	I0731 19:29:04.485102  130103 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:29:04.485130  130103 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:29:04.485499  130103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/config.json ...
	I0731 19:29:04.485525  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/config.json: {Name:mk727355046b816e37cdce50043b5ec4432c4fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:04.485709  130103 start.go:360] acquireMachinesLock for addons-715925: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:29:04.485769  130103 start.go:364] duration metric: took 44.002µs to acquireMachinesLock for "addons-715925"
	I0731 19:29:04.485792  130103 start.go:93] Provisioning new machine with config: &{Name:addons-715925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:29:04.485873  130103 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 19:29:04.487578  130103 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 19:29:04.487760  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:04.487812  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:04.502642  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I0731 19:29:04.503097  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:04.503685  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:04.503708  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:04.504086  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:04.504301  130103 main.go:141] libmachine: (addons-715925) Calling .GetMachineName
	I0731 19:29:04.504446  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:04.504802  130103 start.go:159] libmachine.API.Create for "addons-715925" (driver="kvm2")
	I0731 19:29:04.504853  130103 client.go:168] LocalClient.Create starting
	I0731 19:29:04.504895  130103 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 19:29:04.680626  130103 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 19:29:04.776070  130103 main.go:141] libmachine: Running pre-create checks...
	I0731 19:29:04.776094  130103 main.go:141] libmachine: (addons-715925) Calling .PreCreateCheck
	I0731 19:29:04.776623  130103 main.go:141] libmachine: (addons-715925) Calling .GetConfigRaw
	I0731 19:29:04.777047  130103 main.go:141] libmachine: Creating machine...
	I0731 19:29:04.777060  130103 main.go:141] libmachine: (addons-715925) Calling .Create
	I0731 19:29:04.777193  130103 main.go:141] libmachine: (addons-715925) Creating KVM machine...
	I0731 19:29:04.778711  130103 main.go:141] libmachine: (addons-715925) DBG | found existing default KVM network
	I0731 19:29:04.779877  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:04.779717  130125 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012fad0}
	I0731 19:29:04.779937  130103 main.go:141] libmachine: (addons-715925) DBG | created network xml: 
	I0731 19:29:04.779956  130103 main.go:141] libmachine: (addons-715925) DBG | <network>
	I0731 19:29:04.779967  130103 main.go:141] libmachine: (addons-715925) DBG |   <name>mk-addons-715925</name>
	I0731 19:29:04.779978  130103 main.go:141] libmachine: (addons-715925) DBG |   <dns enable='no'/>
	I0731 19:29:04.779987  130103 main.go:141] libmachine: (addons-715925) DBG |   
	I0731 19:29:04.779996  130103 main.go:141] libmachine: (addons-715925) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 19:29:04.780008  130103 main.go:141] libmachine: (addons-715925) DBG |     <dhcp>
	I0731 19:29:04.780013  130103 main.go:141] libmachine: (addons-715925) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 19:29:04.780022  130103 main.go:141] libmachine: (addons-715925) DBG |     </dhcp>
	I0731 19:29:04.780030  130103 main.go:141] libmachine: (addons-715925) DBG |   </ip>
	I0731 19:29:04.780054  130103 main.go:141] libmachine: (addons-715925) DBG |   
	I0731 19:29:04.780071  130103 main.go:141] libmachine: (addons-715925) DBG | </network>
	I0731 19:29:04.780110  130103 main.go:141] libmachine: (addons-715925) DBG | 
	I0731 19:29:04.785443  130103 main.go:141] libmachine: (addons-715925) DBG | trying to create private KVM network mk-addons-715925 192.168.39.0/24...
	I0731 19:29:04.850545  130103 main.go:141] libmachine: (addons-715925) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925 ...
	I0731 19:29:04.850579  130103 main.go:141] libmachine: (addons-715925) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:29:04.850601  130103 main.go:141] libmachine: (addons-715925) DBG | private KVM network mk-addons-715925 192.168.39.0/24 created
	I0731 19:29:04.850629  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:04.850489  130125 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:29:04.850679  130103 main.go:141] libmachine: (addons-715925) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 19:29:05.139561  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:05.139426  130125 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa...
	I0731 19:29:05.268204  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:05.268030  130125 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/addons-715925.rawdisk...
	I0731 19:29:05.268240  130103 main.go:141] libmachine: (addons-715925) DBG | Writing magic tar header
	I0731 19:29:05.268254  130103 main.go:141] libmachine: (addons-715925) DBG | Writing SSH key tar header
	I0731 19:29:05.268267  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:05.268158  130125 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925 ...
	I0731 19:29:05.268292  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925
	I0731 19:29:05.268309  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 19:29:05.268321  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925 (perms=drwx------)
	I0731 19:29:05.268359  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:29:05.268437  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:29:05.268453  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 19:29:05.268489  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:29:05.268505  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 19:29:05.268514  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:29:05.268529  130103 main.go:141] libmachine: (addons-715925) DBG | Checking permissions on dir: /home
	I0731 19:29:05.268543  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 19:29:05.268555  130103 main.go:141] libmachine: (addons-715925) DBG | Skipping /home - not owner
	I0731 19:29:05.268576  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:29:05.268589  130103 main.go:141] libmachine: (addons-715925) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:29:05.268606  130103 main.go:141] libmachine: (addons-715925) Creating domain...
	I0731 19:29:05.269420  130103 main.go:141] libmachine: (addons-715925) define libvirt domain using xml: 
	I0731 19:29:05.269451  130103 main.go:141] libmachine: (addons-715925) <domain type='kvm'>
	I0731 19:29:05.269462  130103 main.go:141] libmachine: (addons-715925)   <name>addons-715925</name>
	I0731 19:29:05.269473  130103 main.go:141] libmachine: (addons-715925)   <memory unit='MiB'>4000</memory>
	I0731 19:29:05.269482  130103 main.go:141] libmachine: (addons-715925)   <vcpu>2</vcpu>
	I0731 19:29:05.269488  130103 main.go:141] libmachine: (addons-715925)   <features>
	I0731 19:29:05.269497  130103 main.go:141] libmachine: (addons-715925)     <acpi/>
	I0731 19:29:05.269506  130103 main.go:141] libmachine: (addons-715925)     <apic/>
	I0731 19:29:05.269517  130103 main.go:141] libmachine: (addons-715925)     <pae/>
	I0731 19:29:05.269525  130103 main.go:141] libmachine: (addons-715925)     
	I0731 19:29:05.269535  130103 main.go:141] libmachine: (addons-715925)   </features>
	I0731 19:29:05.269545  130103 main.go:141] libmachine: (addons-715925)   <cpu mode='host-passthrough'>
	I0731 19:29:05.269576  130103 main.go:141] libmachine: (addons-715925)   
	I0731 19:29:05.269597  130103 main.go:141] libmachine: (addons-715925)   </cpu>
	I0731 19:29:05.269607  130103 main.go:141] libmachine: (addons-715925)   <os>
	I0731 19:29:05.269616  130103 main.go:141] libmachine: (addons-715925)     <type>hvm</type>
	I0731 19:29:05.269626  130103 main.go:141] libmachine: (addons-715925)     <boot dev='cdrom'/>
	I0731 19:29:05.269641  130103 main.go:141] libmachine: (addons-715925)     <boot dev='hd'/>
	I0731 19:29:05.269654  130103 main.go:141] libmachine: (addons-715925)     <bootmenu enable='no'/>
	I0731 19:29:05.269665  130103 main.go:141] libmachine: (addons-715925)   </os>
	I0731 19:29:05.269677  130103 main.go:141] libmachine: (addons-715925)   <devices>
	I0731 19:29:05.269686  130103 main.go:141] libmachine: (addons-715925)     <disk type='file' device='cdrom'>
	I0731 19:29:05.269704  130103 main.go:141] libmachine: (addons-715925)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/boot2docker.iso'/>
	I0731 19:29:05.269727  130103 main.go:141] libmachine: (addons-715925)       <target dev='hdc' bus='scsi'/>
	I0731 19:29:05.269739  130103 main.go:141] libmachine: (addons-715925)       <readonly/>
	I0731 19:29:05.269750  130103 main.go:141] libmachine: (addons-715925)     </disk>
	I0731 19:29:05.269763  130103 main.go:141] libmachine: (addons-715925)     <disk type='file' device='disk'>
	I0731 19:29:05.269791  130103 main.go:141] libmachine: (addons-715925)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:29:05.269807  130103 main.go:141] libmachine: (addons-715925)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/addons-715925.rawdisk'/>
	I0731 19:29:05.269822  130103 main.go:141] libmachine: (addons-715925)       <target dev='hda' bus='virtio'/>
	I0731 19:29:05.269832  130103 main.go:141] libmachine: (addons-715925)     </disk>
	I0731 19:29:05.269843  130103 main.go:141] libmachine: (addons-715925)     <interface type='network'>
	I0731 19:29:05.269854  130103 main.go:141] libmachine: (addons-715925)       <source network='mk-addons-715925'/>
	I0731 19:29:05.269864  130103 main.go:141] libmachine: (addons-715925)       <model type='virtio'/>
	I0731 19:29:05.269874  130103 main.go:141] libmachine: (addons-715925)     </interface>
	I0731 19:29:05.269889  130103 main.go:141] libmachine: (addons-715925)     <interface type='network'>
	I0731 19:29:05.269906  130103 main.go:141] libmachine: (addons-715925)       <source network='default'/>
	I0731 19:29:05.269919  130103 main.go:141] libmachine: (addons-715925)       <model type='virtio'/>
	I0731 19:29:05.269929  130103 main.go:141] libmachine: (addons-715925)     </interface>
	I0731 19:29:05.269938  130103 main.go:141] libmachine: (addons-715925)     <serial type='pty'>
	I0731 19:29:05.269948  130103 main.go:141] libmachine: (addons-715925)       <target port='0'/>
	I0731 19:29:05.269956  130103 main.go:141] libmachine: (addons-715925)     </serial>
	I0731 19:29:05.269972  130103 main.go:141] libmachine: (addons-715925)     <console type='pty'>
	I0731 19:29:05.269983  130103 main.go:141] libmachine: (addons-715925)       <target type='serial' port='0'/>
	I0731 19:29:05.269993  130103 main.go:141] libmachine: (addons-715925)     </console>
	I0731 19:29:05.270005  130103 main.go:141] libmachine: (addons-715925)     <rng model='virtio'>
	I0731 19:29:05.270017  130103 main.go:141] libmachine: (addons-715925)       <backend model='random'>/dev/random</backend>
	I0731 19:29:05.270027  130103 main.go:141] libmachine: (addons-715925)     </rng>
	I0731 19:29:05.270048  130103 main.go:141] libmachine: (addons-715925)     
	I0731 19:29:05.270058  130103 main.go:141] libmachine: (addons-715925)     
	I0731 19:29:05.270064  130103 main.go:141] libmachine: (addons-715925)   </devices>
	I0731 19:29:05.270072  130103 main.go:141] libmachine: (addons-715925) </domain>
	I0731 19:29:05.270082  130103 main.go:141] libmachine: (addons-715925) 
	I0731 19:29:05.276041  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:21:b5:e6 in network default
	I0731 19:29:05.276617  130103 main.go:141] libmachine: (addons-715925) Ensuring networks are active...
	I0731 19:29:05.276637  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:05.277270  130103 main.go:141] libmachine: (addons-715925) Ensuring network default is active
	I0731 19:29:05.277574  130103 main.go:141] libmachine: (addons-715925) Ensuring network mk-addons-715925 is active
	I0731 19:29:05.278004  130103 main.go:141] libmachine: (addons-715925) Getting domain xml...
	I0731 19:29:05.278544  130103 main.go:141] libmachine: (addons-715925) Creating domain...
	I0731 19:29:06.674479  130103 main.go:141] libmachine: (addons-715925) Waiting to get IP...
	I0731 19:29:06.675364  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:06.675850  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:06.675926  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:06.675865  130125 retry.go:31] will retry after 271.598681ms: waiting for machine to come up
	I0731 19:29:06.949597  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:06.950140  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:06.950171  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:06.950093  130125 retry.go:31] will retry after 283.757518ms: waiting for machine to come up
	I0731 19:29:07.235357  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:07.235799  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:07.235822  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:07.235733  130125 retry.go:31] will retry after 434.066918ms: waiting for machine to come up
	I0731 19:29:07.671315  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:07.671715  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:07.671742  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:07.671674  130125 retry.go:31] will retry after 454.225101ms: waiting for machine to come up
	I0731 19:29:08.128266  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:08.128670  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:08.128695  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:08.128624  130125 retry.go:31] will retry after 459.247068ms: waiting for machine to come up
	I0731 19:29:08.589185  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:08.589684  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:08.589728  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:08.589665  130125 retry.go:31] will retry after 575.376406ms: waiting for machine to come up
	I0731 19:29:09.166332  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:09.166742  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:09.166768  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:09.166686  130125 retry.go:31] will retry after 965.991268ms: waiting for machine to come up
	I0731 19:29:10.134425  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:10.134903  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:10.134923  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:10.134872  130125 retry.go:31] will retry after 1.368485162s: waiting for machine to come up
	I0731 19:29:11.505444  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:11.505827  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:11.505849  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:11.505798  130125 retry.go:31] will retry after 1.510757371s: waiting for machine to come up
	I0731 19:29:13.018418  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:13.018855  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:13.018884  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:13.018781  130125 retry.go:31] will retry after 1.809878449s: waiting for machine to come up
	I0731 19:29:14.830581  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:14.831044  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:14.831074  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:14.830987  130125 retry.go:31] will retry after 2.137587319s: waiting for machine to come up
	I0731 19:29:16.971122  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:16.971484  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:16.971503  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:16.971446  130125 retry.go:31] will retry after 2.933911969s: waiting for machine to come up
	I0731 19:29:19.907193  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:19.907671  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:19.907699  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:19.907619  130125 retry.go:31] will retry after 3.252960875s: waiting for machine to come up
	I0731 19:29:23.163952  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:23.164444  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find current IP address of domain addons-715925 in network mk-addons-715925
	I0731 19:29:23.164472  130103 main.go:141] libmachine: (addons-715925) DBG | I0731 19:29:23.164334  130125 retry.go:31] will retry after 4.321243048s: waiting for machine to come up
	I0731 19:29:27.488876  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.489438  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has current primary IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.489460  130103 main.go:141] libmachine: (addons-715925) Found IP for machine: 192.168.39.147
	I0731 19:29:27.489473  130103 main.go:141] libmachine: (addons-715925) Reserving static IP address...
	I0731 19:29:27.490026  130103 main.go:141] libmachine: (addons-715925) DBG | unable to find host DHCP lease matching {name: "addons-715925", mac: "52:54:00:6d:64:ee", ip: "192.168.39.147"} in network mk-addons-715925
	I0731 19:29:27.561180  130103 main.go:141] libmachine: (addons-715925) DBG | Getting to WaitForSSH function...
	I0731 19:29:27.561215  130103 main.go:141] libmachine: (addons-715925) Reserved static IP address: 192.168.39.147
	I0731 19:29:27.561230  130103 main.go:141] libmachine: (addons-715925) Waiting for SSH to be available...
	I0731 19:29:27.563779  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.564311  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:27.564339  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.564506  130103 main.go:141] libmachine: (addons-715925) DBG | Using SSH client type: external
	I0731 19:29:27.564561  130103 main.go:141] libmachine: (addons-715925) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa (-rw-------)
	I0731 19:29:27.564599  130103 main.go:141] libmachine: (addons-715925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:29:27.564612  130103 main.go:141] libmachine: (addons-715925) DBG | About to run SSH command:
	I0731 19:29:27.564633  130103 main.go:141] libmachine: (addons-715925) DBG | exit 0
	I0731 19:29:27.689829  130103 main.go:141] libmachine: (addons-715925) DBG | SSH cmd err, output: <nil>: 
	I0731 19:29:27.690058  130103 main.go:141] libmachine: (addons-715925) KVM machine creation complete!
	I0731 19:29:27.690402  130103 main.go:141] libmachine: (addons-715925) Calling .GetConfigRaw
	I0731 19:29:27.690966  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:27.691153  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:27.691310  130103 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:29:27.691325  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:27.692577  130103 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:29:27.692608  130103 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:29:27.692617  130103 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:29:27.692629  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:27.694881  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.695240  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:27.695268  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.695374  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:27.695552  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.695697  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.695805  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:27.695952  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:27.696157  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:27.696167  130103 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:29:27.800622  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:29:27.800652  130103 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:29:27.800671  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:27.803597  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.804002  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:27.804035  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.804227  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:27.804453  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.804633  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.804909  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:27.805092  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:27.805262  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:27.805274  130103 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:29:27.910394  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:29:27.910483  130103 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:29:27.910490  130103 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:29:27.910498  130103 main.go:141] libmachine: (addons-715925) Calling .GetMachineName
	I0731 19:29:27.910759  130103 buildroot.go:166] provisioning hostname "addons-715925"
	I0731 19:29:27.910784  130103 main.go:141] libmachine: (addons-715925) Calling .GetMachineName
	I0731 19:29:27.910982  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:27.913314  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.913634  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:27.913662  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:27.913738  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:27.913911  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.914077  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:27.914230  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:27.914430  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:27.914672  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:27.914689  130103 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-715925 && echo "addons-715925" | sudo tee /etc/hostname
	I0731 19:29:28.039498  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-715925
	
	I0731 19:29:28.039526  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.042484  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.042888  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.042936  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.043163  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.043373  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.043556  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.043736  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.043956  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:28.044166  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:28.044190  130103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-715925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-715925/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-715925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:29:28.158317  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:29:28.158349  130103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:29:28.158392  130103 buildroot.go:174] setting up certificates
	I0731 19:29:28.158407  130103 provision.go:84] configureAuth start
	I0731 19:29:28.158420  130103 main.go:141] libmachine: (addons-715925) Calling .GetMachineName
	I0731 19:29:28.158726  130103 main.go:141] libmachine: (addons-715925) Calling .GetIP
	I0731 19:29:28.161183  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.161550  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.161578  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.161728  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.163593  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.163973  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.163999  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.164114  130103 provision.go:143] copyHostCerts
	I0731 19:29:28.164207  130103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:29:28.164333  130103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:29:28.164395  130103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:29:28.164440  130103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.addons-715925 san=[127.0.0.1 192.168.39.147 addons-715925 localhost minikube]
	I0731 19:29:28.330547  130103 provision.go:177] copyRemoteCerts
	I0731 19:29:28.330611  130103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:29:28.330647  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.333106  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.333418  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.333453  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.333621  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.333811  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.333991  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.334098  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:28.415331  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:29:28.439433  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 19:29:28.462891  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:29:28.485694  130103 provision.go:87] duration metric: took 327.271478ms to configureAuth
	I0731 19:29:28.485725  130103 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:29:28.485913  130103 config.go:182] Loaded profile config "addons-715925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:29:28.486007  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.488338  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.488692  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.488719  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.488875  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.489084  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.489268  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.489469  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.489644  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:28.489806  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:28.489821  130103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:29:28.746688  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:29:28.746712  130103 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:29:28.746720  130103 main.go:141] libmachine: (addons-715925) Calling .GetURL
	I0731 19:29:28.747938  130103 main.go:141] libmachine: (addons-715925) DBG | Using libvirt version 6000000
	I0731 19:29:28.750067  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.750386  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.750405  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.750584  130103 main.go:141] libmachine: Docker is up and running!
	I0731 19:29:28.750601  130103 main.go:141] libmachine: Reticulating splines...
	I0731 19:29:28.750608  130103 client.go:171] duration metric: took 24.245744327s to LocalClient.Create
	I0731 19:29:28.750649  130103 start.go:167] duration metric: took 24.245847855s to libmachine.API.Create "addons-715925"
	I0731 19:29:28.750668  130103 start.go:293] postStartSetup for "addons-715925" (driver="kvm2")
	I0731 19:29:28.750680  130103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:29:28.750697  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.750950  130103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:29:28.750975  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.753095  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.753421  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.753447  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.753585  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.753766  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.753918  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.754026  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:28.835632  130103 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:29:28.840018  130103 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:29:28.840049  130103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:29:28.840129  130103 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:29:28.840161  130103 start.go:296] duration metric: took 89.484265ms for postStartSetup
	I0731 19:29:28.840202  130103 main.go:141] libmachine: (addons-715925) Calling .GetConfigRaw
	I0731 19:29:28.840798  130103 main.go:141] libmachine: (addons-715925) Calling .GetIP
	I0731 19:29:28.843150  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.843459  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.843490  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.843690  130103 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/config.json ...
	I0731 19:29:28.843903  130103 start.go:128] duration metric: took 24.358018339s to createHost
	I0731 19:29:28.843932  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.846455  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.846779  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.846805  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.846924  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.847164  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.847378  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.847487  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.847777  130103 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:28.847922  130103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0731 19:29:28.847931  130103 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:29:28.950058  130103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454168.925862125
	
	I0731 19:29:28.950080  130103 fix.go:216] guest clock: 1722454168.925862125
	I0731 19:29:28.950087  130103 fix.go:229] Guest: 2024-07-31 19:29:28.925862125 +0000 UTC Remote: 2024-07-31 19:29:28.84391685 +0000 UTC m=+24.461945574 (delta=81.945275ms)
	I0731 19:29:28.950129  130103 fix.go:200] guest clock delta is within tolerance: 81.945275ms
	I0731 19:29:28.950138  130103 start.go:83] releasing machines lock for "addons-715925", held for 24.46435786s
	I0731 19:29:28.950158  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.950414  130103 main.go:141] libmachine: (addons-715925) Calling .GetIP
	I0731 19:29:28.952987  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.953321  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.953361  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.953501  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.953939  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.954133  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:28.954239  130103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:29:28.954286  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.954363  130103 ssh_runner.go:195] Run: cat /version.json
	I0731 19:29:28.954391  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:28.956845  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.957097  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.957129  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.957149  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.957251  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.957476  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.957491  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:28.957529  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:28.957647  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.957718  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:28.957817  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:28.957930  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:28.958059  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:28.958261  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:29.058932  130103 ssh_runner.go:195] Run: systemctl --version
	I0731 19:29:29.065031  130103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:29:29.222642  130103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:29:29.228923  130103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:29:29.228990  130103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:29:29.244261  130103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:29:29.244285  130103 start.go:495] detecting cgroup driver to use...
	I0731 19:29:29.244351  130103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:29:29.259719  130103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:29:29.273041  130103 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:29:29.273093  130103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:29:29.286060  130103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:29:29.298958  130103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:29:29.411567  130103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:29:29.540344  130103 docker.go:233] disabling docker service ...
	I0731 19:29:29.540422  130103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:29:29.555924  130103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:29:29.568381  130103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:29:29.700845  130103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:29:29.819524  130103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:29:29.833860  130103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:29:29.852510  130103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:29:29.852566  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.862467  130103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:29:29.862541  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.872623  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.882709  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.892895  130103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:29:29.903734  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.913681  130103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:29.930998  130103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
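	Taken together, the sed edits above should leave the relevant portion of /etc/crio/crio.conf.d/02-crio.conf looking roughly like the following. This is a sketch reconstructed from the commands themselves, not a copy of the file captured from the node:
	
		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]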
	I0731 19:29:29.941538  130103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:29:29.950963  130103 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:29:29.951016  130103 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:29:29.963304  130103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:29:29.972336  130103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:29:30.078706  130103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:29:30.213190  130103 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:29:30.213303  130103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:29:30.218293  130103 start.go:563] Will wait 60s for crictl version
	I0731 19:29:30.218367  130103 ssh_runner.go:195] Run: which crictl
	I0731 19:29:30.222123  130103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:29:30.260938  130103 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:29:30.261047  130103 ssh_runner.go:195] Run: crio --version
	I0731 19:29:30.289684  130103 ssh_runner.go:195] Run: crio --version
	I0731 19:29:30.319330  130103 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:29:30.320491  130103 main.go:141] libmachine: (addons-715925) Calling .GetIP
	I0731 19:29:30.322838  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:30.323164  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:30.323192  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:30.323401  130103 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:29:30.327491  130103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:29:30.339852  130103 kubeadm.go:883] updating cluster {Name:addons-715925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:29:30.339962  130103 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:29:30.340007  130103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:29:30.372050  130103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 19:29:30.372118  130103 ssh_runner.go:195] Run: which lz4
	I0731 19:29:30.376128  130103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 19:29:30.380300  130103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 19:29:30.380329  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 19:29:31.770664  130103 crio.go:462] duration metric: took 1.394563738s to copy over tarball
	I0731 19:29:31.770753  130103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 19:29:34.066094  130103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.2953118s)
	I0731 19:29:34.066130  130103 crio.go:469] duration metric: took 2.295432134s to extract the tarball
	I0731 19:29:34.066141  130103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 19:29:34.109244  130103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:29:34.150321  130103 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:29:34.150348  130103 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:29:34.150359  130103 kubeadm.go:934] updating node { 192.168.39.147 8443 v1.30.3 crio true true} ...
	I0731 19:29:34.150508  130103 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-715925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:29:34.150595  130103 ssh_runner.go:195] Run: crio config
	I0731 19:29:34.197787  130103 cni.go:84] Creating CNI manager for ""
	I0731 19:29:34.197811  130103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:29:34.197824  130103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:29:34.197850  130103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-715925 NodeName:addons-715925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:29:34.198038  130103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-715925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 19:29:34.198117  130103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:29:34.208277  130103 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:29:34.208339  130103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:29:34.217756  130103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 19:29:34.234609  130103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:29:34.250799  130103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 19:29:34.266713  130103 ssh_runner.go:195] Run: grep 192.168.39.147	control-plane.minikube.internal$ /etc/hosts
	I0731 19:29:34.270369  130103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:29:34.281847  130103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:29:34.410999  130103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:29:34.428980  130103 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925 for IP: 192.168.39.147
	I0731 19:29:34.429008  130103 certs.go:194] generating shared ca certs ...
	I0731 19:29:34.429031  130103 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.429206  130103 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:29:34.734405  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt ...
	I0731 19:29:34.734432  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt: {Name:mk4d5f8eac5af4bed4fe496450a7ef33fb556296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.734604  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key ...
	I0731 19:29:34.734616  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key: {Name:mk4606c3c07cf89342d6e10a5cac72aecafe6804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.734685  130103 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:29:34.986862  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt ...
	I0731 19:29:34.986892  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt: {Name:mk95e510e38e7df0f774b9947d241d17543c0a4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.987053  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key ...
	I0731 19:29:34.987071  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key: {Name:mk85f3a38f86eed75b7fe062aaa793236334658d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:34.987137  130103 certs.go:256] generating profile certs ...
	I0731 19:29:34.987200  130103 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.key
	I0731 19:29:34.987214  130103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt with IP's: []
	I0731 19:29:35.054304  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt ...
	I0731 19:29:35.054338  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: {Name:mkc793b360bd473fa37e04348368bff9302c6c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.054498  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.key ...
	I0731 19:29:35.054510  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.key: {Name:mk18437f70299f073c6f602ddcfbfcda0594a73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.054573  130103 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key.b1593293
	I0731 19:29:35.054592  130103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt.b1593293 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147]
	I0731 19:29:35.169660  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt.b1593293 ...
	I0731 19:29:35.169698  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt.b1593293: {Name:mk4ff9ac7cf283ced725033db8d542a71d850615 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.169888  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key.b1593293 ...
	I0731 19:29:35.169908  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key.b1593293: {Name:mkbbf4be2e519f0905edc297fdbc4c8d4c1c482b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.170003  130103 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt.b1593293 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt
	I0731 19:29:35.170120  130103 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key.b1593293 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key
	I0731 19:29:35.170173  130103 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.key
	I0731 19:29:35.170190  130103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.crt with IP's: []
	I0731 19:29:35.390027  130103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.crt ...
	I0731 19:29:35.390059  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.crt: {Name:mka698132995fe1e592227c9d5a8ad9d6dcfae50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.390248  130103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.key ...
	I0731 19:29:35.390265  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.key: {Name:mk4a7ee209fc8d27c2805c44e7ee824f61d0fcd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:35.390488  130103 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:29:35.390532  130103 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:29:35.390570  130103 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:29:35.390600  130103 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
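	The certificate material above is produced by minikube's own crypto helpers (crypto.go), but the layout is the usual one: a shared CA (minikubeCA), a proxy-client CA, and per-profile leaf certs signed by them, with the apiserver cert carrying the service, loopback, and node IPs as SANs. Purely as an illustration of that layout, an equivalent set could be produced with openssl; the subject names and validity periods below are placeholders, not values taken from this run:
	
		# illustrative sketch only -- the test used minikube's internal generators, not openssl
		openssl genrsa -out ca.key 2048
		openssl req -x509 -new -nodes -key ca.key -subj "/CN=minikubeCA" -days 3650 -out ca.crt
		openssl genrsa -out apiserver.key 2048
		openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
		openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
		  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.147') \
		  -out apiserver.crt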
	I0731 19:29:35.391226  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:29:35.418432  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:29:35.442325  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:29:35.466460  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:29:35.490519  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 19:29:35.514064  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 19:29:35.537218  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:29:35.560272  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:29:35.583218  130103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:29:35.606436  130103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:29:35.623095  130103 ssh_runner.go:195] Run: openssl version
	I0731 19:29:35.628879  130103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:29:35.639651  130103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:35.643904  130103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:35.643945  130103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:35.649490  130103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:29:35.659602  130103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:29:35.663531  130103 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:29:35.663605  130103 kubeadm.go:392] StartCluster: {Name:addons-715925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-715925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:29:35.663680  130103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:29:35.663720  130103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:29:35.699253  130103 cri.go:89] found id: ""
	I0731 19:29:35.699339  130103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 19:29:35.709293  130103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:29:35.718238  130103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:29:35.727600  130103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 19:29:35.727621  130103 kubeadm.go:157] found existing configuration files:
	
	I0731 19:29:35.727662  130103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:29:35.736344  130103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 19:29:35.736399  130103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 19:29:35.745073  130103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:29:35.753551  130103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 19:29:35.753596  130103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 19:29:35.762456  130103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:29:35.770654  130103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 19:29:35.770703  130103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:29:35.779513  130103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:29:35.787827  130103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 19:29:35.787889  130103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:29:35.796698  130103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 19:29:35.992220  130103 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:29:45.872109  130103 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 19:29:45.872193  130103 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 19:29:45.872285  130103 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 19:29:45.872394  130103 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 19:29:45.872481  130103 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 19:29:45.872569  130103 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 19:29:45.874157  130103 out.go:204]   - Generating certificates and keys ...
	I0731 19:29:45.874261  130103 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 19:29:45.874359  130103 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 19:29:45.874459  130103 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 19:29:45.874533  130103 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 19:29:45.874632  130103 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 19:29:45.874716  130103 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 19:29:45.874767  130103 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 19:29:45.874879  130103 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-715925 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0731 19:29:45.874952  130103 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 19:29:45.875097  130103 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-715925 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0731 19:29:45.875188  130103 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 19:29:45.875282  130103 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 19:29:45.875346  130103 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 19:29:45.875425  130103 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 19:29:45.875469  130103 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 19:29:45.875517  130103 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 19:29:45.875562  130103 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 19:29:45.875634  130103 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 19:29:45.875687  130103 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 19:29:45.875752  130103 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 19:29:45.875805  130103 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 19:29:45.877372  130103 out.go:204]   - Booting up control plane ...
	I0731 19:29:45.877474  130103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 19:29:45.877558  130103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 19:29:45.877632  130103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 19:29:45.877743  130103 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 19:29:45.877812  130103 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 19:29:45.877844  130103 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 19:29:45.877957  130103 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 19:29:45.878030  130103 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 19:29:45.878082  130103 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001184953s
	I0731 19:29:45.878160  130103 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 19:29:45.878242  130103 kubeadm.go:310] [api-check] The API server is healthy after 5.00232287s
	I0731 19:29:45.878383  130103 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 19:29:45.878508  130103 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 19:29:45.878565  130103 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 19:29:45.878785  130103 kubeadm.go:310] [mark-control-plane] Marking the node addons-715925 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 19:29:45.878876  130103 kubeadm.go:310] [bootstrap-token] Using token: ule4iw.fyjygud86o13jnep
	I0731 19:29:45.880371  130103 out.go:204]   - Configuring RBAC rules ...
	I0731 19:29:45.880503  130103 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 19:29:45.880602  130103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 19:29:45.880737  130103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 19:29:45.880850  130103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 19:29:45.880949  130103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 19:29:45.881058  130103 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 19:29:45.881196  130103 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 19:29:45.881259  130103 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 19:29:45.881322  130103 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 19:29:45.881331  130103 kubeadm.go:310] 
	I0731 19:29:45.881442  130103 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 19:29:45.881453  130103 kubeadm.go:310] 
	I0731 19:29:45.881513  130103 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 19:29:45.881519  130103 kubeadm.go:310] 
	I0731 19:29:45.881545  130103 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 19:29:45.881610  130103 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 19:29:45.881690  130103 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 19:29:45.881702  130103 kubeadm.go:310] 
	I0731 19:29:45.881778  130103 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 19:29:45.881787  130103 kubeadm.go:310] 
	I0731 19:29:45.881830  130103 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 19:29:45.881836  130103 kubeadm.go:310] 
	I0731 19:29:45.881879  130103 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 19:29:45.881944  130103 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 19:29:45.882010  130103 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 19:29:45.882019  130103 kubeadm.go:310] 
	I0731 19:29:45.882134  130103 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 19:29:45.882246  130103 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 19:29:45.882254  130103 kubeadm.go:310] 
	I0731 19:29:45.882340  130103 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ule4iw.fyjygud86o13jnep \
	I0731 19:29:45.882428  130103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 19:29:45.882459  130103 kubeadm.go:310] 	--control-plane 
	I0731 19:29:45.882474  130103 kubeadm.go:310] 
	I0731 19:29:45.882584  130103 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 19:29:45.882595  130103 kubeadm.go:310] 
	I0731 19:29:45.882662  130103 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ule4iw.fyjygud86o13jnep \
	I0731 19:29:45.882764  130103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 19:29:45.882776  130103 cni.go:84] Creating CNI manager for ""
	I0731 19:29:45.882783  130103 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:29:45.884467  130103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 19:29:45.885857  130103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 19:29:45.897393  130103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 19:29:45.919025  130103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 19:29:45.919115  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:45.919116  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-715925 minikube.k8s.io/updated_at=2024_07_31T19_29_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=addons-715925 minikube.k8s.io/primary=true
	I0731 19:29:46.045725  130103 ops.go:34] apiserver oom_adj: -16
	I0731 19:29:46.045909  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:46.546185  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:47.046871  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:47.546374  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:48.046035  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:48.546363  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:49.046031  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:49.546335  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:50.046265  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:50.546363  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:51.046970  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:51.546263  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:52.046309  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:52.546576  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:53.046479  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:53.546554  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:54.046653  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:54.546581  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:55.046796  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:55.546805  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:56.046765  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:56.546133  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:57.046263  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:57.546801  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:58.046171  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:58.546059  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:59.046786  130103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:29:59.132969  130103 kubeadm.go:1113] duration metric: took 13.213927952s to wait for elevateKubeSystemPrivileges
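	The run of identical "kubectl get sa default" invocations above is minikube polling for the default service account to become available before it finishes cluster bring-up; the roughly 500 ms cadence matches the timestamps. Functionally it amounts to a loop along these lines (a behavioural sketch, not the Go code that produced the log):
	
		# poll until the default service account exists, then continue
		until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5
		done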
	I0731 19:29:59.133015  130103 kubeadm.go:394] duration metric: took 23.469414816s to StartCluster
	I0731 19:29:59.133041  130103 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:59.133177  130103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:29:59.133682  130103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:59.133928  130103 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:29:59.133948  130103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 19:29:59.133988  130103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0731 19:29:59.134114  130103 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-715925"
	I0731 19:29:59.134127  130103 addons.go:69] Setting default-storageclass=true in profile "addons-715925"
	I0731 19:29:59.134152  130103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-715925"
	I0731 19:29:59.134148  130103 addons.go:69] Setting cloud-spanner=true in profile "addons-715925"
	I0731 19:29:59.134156  130103 addons.go:69] Setting metrics-server=true in profile "addons-715925"
	I0731 19:29:59.134165  130103 config.go:182] Loaded profile config "addons-715925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:29:59.134182  130103 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-715925"
	I0731 19:29:59.134195  130103 addons.go:69] Setting gcp-auth=true in profile "addons-715925"
	I0731 19:29:59.134196  130103 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-715925"
	I0731 19:29:59.134209  130103 addons.go:69] Setting ingress=true in profile "addons-715925"
	I0731 19:29:59.134214  130103 mustload.go:65] Loading cluster: addons-715925
	I0731 19:29:59.134220  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134228  130103 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-715925"
	I0731 19:29:59.134232  130103 addons.go:69] Setting volumesnapshots=true in profile "addons-715925"
	I0731 19:29:59.134233  130103 addons.go:69] Setting helm-tiller=true in profile "addons-715925"
	I0731 19:29:59.134251  130103 addons.go:234] Setting addon helm-tiller=true in "addons-715925"
	I0731 19:29:59.134251  130103 addons.go:234] Setting addon volumesnapshots=true in "addons-715925"
	I0731 19:29:59.134251  130103 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-715925"
	I0731 19:29:59.134283  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134285  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134402  130103 config.go:182] Loaded profile config "addons-715925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:29:59.134665  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134685  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134715  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134726  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134186  130103 addons.go:234] Setting addon cloud-spanner=true in "addons-715925"
	I0731 19:29:59.134748  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134765  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134219  130103 addons.go:69] Setting volcano=true in profile "addons-715925"
	I0731 19:29:59.134788  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134790  130103 addons.go:69] Setting ingress-dns=true in profile "addons-715925"
	I0731 19:29:59.134800  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134798  130103 addons.go:69] Setting inspektor-gadget=true in profile "addons-715925"
	I0731 19:29:59.134815  130103 addons.go:234] Setting addon ingress-dns=true in "addons-715925"
	I0731 19:29:59.134824  130103 addons.go:234] Setting addon inspektor-gadget=true in "addons-715925"
	I0731 19:29:59.134673  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134856  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134861  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134955  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.135005  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134771  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134175  130103 addons.go:69] Setting storage-provisioner=true in profile "addons-715925"
	I0731 19:29:59.135289  130103 addons.go:234] Setting addon storage-provisioner=true in "addons-715925"
	I0731 19:29:59.135325  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.135339  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.135366  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.135496  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.134116  130103 addons.go:69] Setting yakd=true in profile "addons-715925"
	I0731 19:29:59.135515  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.135540  130103 addons.go:234] Setting addon yakd=true in "addons-715925"
	I0731 19:29:59.135569  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134224  130103 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-715925"
	I0731 19:29:59.135666  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.135671  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.135693  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.135923  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.135952  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.136026  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.136045  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134186  130103 addons.go:234] Setting addon metrics-server=true in "addons-715925"
	I0731 19:29:59.136129  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.134227  130103 addons.go:234] Setting addon ingress=true in "addons-715925"
	I0731 19:29:59.136203  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.136535  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.136556  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.146340  130103 out.go:177] * Verifying Kubernetes components...
	I0731 19:29:59.134840  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.147024  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.147054  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.148274  130103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:29:59.134198  130103 addons.go:69] Setting registry=true in profile "addons-715925"
	I0731 19:29:59.148409  130103 addons.go:234] Setting addon registry=true in "addons-715925"
	I0731 19:29:59.148447  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.148822  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.148847  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.134839  130103 addons.go:234] Setting addon volcano=true in "addons-715925"
	I0731 19:29:59.149574  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.149915  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.149951  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.155536  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I0731 19:29:59.156111  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.156685  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.156708  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.157146  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.157711  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.157745  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.167057  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I0731 19:29:59.167633  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.168149  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.168171  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.168514  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.169076  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.169114  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.169575  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0731 19:29:59.170075  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.170847  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.170865  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.171418  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.172001  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.172034  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.173273  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43621
	I0731 19:29:59.173437  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0731 19:29:59.173449  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32835
	I0731 19:29:59.173515  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I0731 19:29:59.174874  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.174916  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.175124  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.175214  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.175296  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0731 19:29:59.175393  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39741
	I0731 19:29:59.175464  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0731 19:29:59.175517  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39913
	I0731 19:29:59.175647  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.175677  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.175690  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.176053  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.176111  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.176141  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.176174  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.176296  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.176312  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.176325  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.176580  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.176595  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.176746  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.176756  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.176814  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.176869  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.176910  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.176956  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.177467  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.177576  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.177592  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.177897  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.177938  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.178717  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.178740  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.180665  130103 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-715925"
	I0731 19:29:59.180706  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.181051  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.181083  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.181688  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.181724  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.181808  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.181905  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.181927  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.182043  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.182058  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.182362  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.182427  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.182476  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.182514  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.183191  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.183260  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.183314  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.183828  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.183857  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.184366  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.184403  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.187466  130103 addons.go:234] Setting addon default-storageclass=true in "addons-715925"
	I0731 19:29:59.187516  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.187859  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.187879  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.193685  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:29:59.194078  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.194117  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.194939  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0731 19:29:59.195412  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.201980  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.202014  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.202755  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.202991  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.204766  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.207146  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 19:29:59.208813  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 19:29:59.210243  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 19:29:59.211648  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 19:29:59.212996  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 19:29:59.213781  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0731 19:29:59.213974  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0731 19:29:59.214648  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.215318  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.215339  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.215586  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 19:29:59.215774  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.215940  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.216504  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0731 19:29:59.216589  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.216608  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.217028  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.217388  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.217414  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.217767  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.217791  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.218190  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.218370  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 19:29:59.218576  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.221078  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.221482  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 19:29:59.221897  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.222553  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.222614  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.222812  130103 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 19:29:59.222915  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 19:29:59.222930  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 19:29:59.222953  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.223629  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0731 19:29:59.224054  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.224578  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.224597  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.224992  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.225518  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0731 19:29:59.225610  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.225647  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.225925  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.226449  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.226476  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.226816  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.227080  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.227306  130103 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 19:29:59.227325  130103 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 19:29:59.227345  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.227352  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.227395  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.228212  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0731 19:29:59.228699  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.228988  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33215
	I0731 19:29:59.229315  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35605
	I0731 19:29:59.229502  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.229670  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.229695  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.229693  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.229720  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.230097  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.230185  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.230194  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.230208  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.230364  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.230574  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.230594  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.230614  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.230817  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.230859  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.231061  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.231480  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
	I0731 19:29:59.231651  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.231677  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.231792  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.231969  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.232134  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.232311  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.233373  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.233394  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.233467  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I0731 19:29:59.233974  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.234053  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.234464  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.234476  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.234521  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.234535  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.234509  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.234871  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.234874  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:29:59.234912  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:29:59.235086  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.235139  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:29:59.235160  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:29:59.235188  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:29:59.235201  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:29:59.235209  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:29:59.235651  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:29:59.235657  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:29:59.235678  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 19:29:59.235781  130103 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 19:29:59.238001  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0731 19:29:59.238371  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.238847  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.238868  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.239065  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.239129  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.239235  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.239611  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.239697  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I0731 19:29:59.240074  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.240552  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.240578  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.240703  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.240920  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.241387  130103 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 19:29:59.241465  130103 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 19:29:59.241500  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.241593  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.241539  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.242909  130103 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 19:29:59.243508  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 19:29:59.243527  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.243432  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0731 19:29:59.243745  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I0731 19:29:59.244121  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.244198  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.244611  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.244628  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.244730  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.244743  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.244918  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 19:29:59.244933  130103 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 19:29:59.244951  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.244996  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.244998  130103 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 19:29:59.245103  130103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:29:59.245186  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.245834  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:59.245858  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:59.246260  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.247137  130103 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 19:29:59.247155  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 19:29:59.247176  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.247925  130103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:29:59.247940  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 19:29:59.247956  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.249410  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.251454  130103 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 19:29:59.251639  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.251666  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.251685  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.251701  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.251797  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.251800  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36493
	I0731 19:29:59.252080  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.252278  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.252429  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.252617  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.252650  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.252665  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.252847  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.253048  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.253142  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.253281  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.253301  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.253326  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.253366  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.253409  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 19:29:59.253424  130103 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 19:29:59.253443  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.253465  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.253665  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.253714  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.253967  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.253987  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.254005  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.254032  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.254096  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.254202  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.254241  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.254325  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.254600  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.254784  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.255124  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.255139  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.255306  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.255499  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.255701  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.255871  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.256108  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.256284  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.256567  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.257189  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.257324  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.257771  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.257988  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.258139  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.258324  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.258377  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.260418  130103 out.go:177]   - Using image docker.io/busybox:stable
	I0731 19:29:59.261814  130103 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 19:29:59.263227  130103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 19:29:59.263251  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 19:29:59.263273  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.266646  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.267090  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.267111  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.267302  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.267498  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.267635  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.267774  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.268300  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I0731 19:29:59.268981  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.269740  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.269763  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.270410  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.270725  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.272585  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.272607  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0731 19:29:59.273132  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.273255  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0731 19:29:59.273606  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.273626  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.273740  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.274037  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.274289  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.274311  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.274318  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.274671  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.274741  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42403
	I0731 19:29:59.274820  130103 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 19:29:59.274890  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.275201  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.275677  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.275699  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.276075  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.276242  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.276871  130103 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 19:29:59.276890  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 19:29:59.276909  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.276993  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.277022  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.277199  130103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 19:29:59.277210  130103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 19:29:59.277225  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.278949  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0731 19:29:59.279665  130103 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 19:29:59.279747  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.280342  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.280361  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.280799  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.280923  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.281251  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.281277  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.281373  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.281426  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.282352  130103 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 19:29:59.283702  130103 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 19:29:59.283971  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.283985  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0731 19:29:59.283995  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.284010  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.284016  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.284039  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.284179  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.284260  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.284351  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.284378  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.284495  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.284495  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.285109  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.285512  130103 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 19:29:59.285747  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.285771  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.285868  130103 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 19:29:59.285888  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 19:29:59.285905  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.286214  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.286386  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.288012  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.288161  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45703
	I0731 19:29:59.288659  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:59.288820  130103 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 19:29:59.289107  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:29:59.289220  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:59.289680  130103 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 19:29:59.289692  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:59.289862  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.290299  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.290321  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.290362  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:29:59.290512  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.290688  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.290695  130103 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 19:29:59.290710  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 19:29:59.290733  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.290848  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.290949  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.291505  130103 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 19:29:59.291521  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 19:29:59.291537  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	W0731 19:29:59.292816  130103 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59816->192.168.39.147:22: read: connection reset by peer
	I0731 19:29:59.292862  130103 retry.go:31] will retry after 189.807281ms: ssh: handshake failed: read tcp 192.168.39.1:59816->192.168.39.147:22: read: connection reset by peer
	I0731 19:29:59.292907  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:29:59.294515  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.294743  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.294787  130103 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 19:29:59.295033  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.295052  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.295088  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.295107  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.295277  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.295375  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.295482  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.295610  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.295642  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.295752  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.295765  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.295901  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.296099  130103 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 19:29:59.296113  130103 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 19:29:59.296124  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:29:59.299212  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.299655  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:29:59.299679  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:29:59.299854  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:29:59.300009  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:29:59.300136  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:29:59.300243  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:29:59.523765  130103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:29:59.523810  130103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
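	[editor's note] The sed pipeline in the command above patches the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 in this run). Purely as a sketch, assuming the stock kubeadm Corefile layout, the replaced Corefile ends up with the two inserted directives roughly like this (elided lines unchanged):

	    .:53 {
	        log                  # inserted before the existing "errors" line
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }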
	I0731 19:29:59.544137  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 19:29:59.560206  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 19:29:59.560229  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 19:29:59.589332  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 19:29:59.589369  130103 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 19:29:59.645097  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:29:59.648217  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 19:29:59.650026  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 19:29:59.668901  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 19:29:59.677496  130103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 19:29:59.677518  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 19:29:59.703544  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 19:29:59.703569  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 19:29:59.726007  130103 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 19:29:59.726033  130103 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 19:29:59.749272  130103 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 19:29:59.749309  130103 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 19:29:59.779413  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 19:29:59.789521  130103 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 19:29:59.789543  130103 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 19:29:59.800790  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 19:29:59.800812  130103 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 19:29:59.805233  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 19:29:59.819018  130103 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 19:29:59.819040  130103 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 19:29:59.836362  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 19:29:59.836382  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 19:29:59.928997  130103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 19:29:59.929021  130103 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 19:29:59.960439  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 19:29:59.960467  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 19:29:59.963759  130103 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 19:29:59.963780  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 19:29:59.988432  130103 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 19:29:59.988469  130103 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 19:30:00.023451  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 19:30:00.023479  130103 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 19:30:00.117205  130103 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 19:30:00.117236  130103 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 19:30:00.121076  130103 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 19:30:00.121102  130103 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 19:30:00.140161  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 19:30:00.140183  130103 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 19:30:00.249556  130103 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 19:30:00.249589  130103 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 19:30:00.270672  130103 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 19:30:00.270704  130103 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 19:30:00.299367  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 19:30:00.306796  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 19:30:00.352711  130103 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 19:30:00.352744  130103 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 19:30:00.371429  130103 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 19:30:00.371451  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 19:30:00.375932  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 19:30:00.375948  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 19:30:00.430498  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 19:30:00.516745  130103 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 19:30:00.516774  130103 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 19:30:00.544205  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 19:30:00.544232  130103 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 19:30:00.605046  130103 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 19:30:00.605079  130103 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 19:30:00.624781  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 19:30:00.704940  130103 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 19:30:00.704966  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 19:30:00.788107  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 19:30:00.788131  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 19:30:01.007391  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 19:30:01.007423  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 19:30:01.108677  130103 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 19:30:01.108705  130103 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 19:30:01.162346  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 19:30:01.406198  130103 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 19:30:01.406232  130103 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 19:30:01.510330  130103 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 19:30:01.510374  130103 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 19:30:01.610122  130103 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.086324195s)
	I0731 19:30:01.610181  130103 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.086349388s)
	I0731 19:30:01.610199  130103 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0731 19:30:01.611190  130103 node_ready.go:35] waiting up to 6m0s for node "addons-715925" to be "Ready" ...
	I0731 19:30:01.614483  130103 node_ready.go:49] node "addons-715925" has status "Ready":"True"
	I0731 19:30:01.614505  130103 node_ready.go:38] duration metric: took 3.288322ms for node "addons-715925" to be "Ready" ...
	I0731 19:30:01.614514  130103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:01.621051  130103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:01.697412  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 19:30:01.825984  130103 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 19:30:01.826010  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 19:30:01.948227  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.404045582s)
	I0731 19:30:01.948285  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:01.948299  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:01.948686  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:01.948707  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:01.948717  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:01.948725  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:01.948733  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:01.948997  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:01.949010  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:02.114883  130103 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-715925" context rescaled to 1 replicas
	I0731 19:30:02.172617  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 19:30:03.690049  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:04.202506  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.557362637s)
	I0731 19:30:04.202564  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202577  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.202570  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.554319225s)
	I0731 19:30:04.202596  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.552542661s)
	I0731 19:30:04.202612  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202626  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.202636  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202638  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.53370189s)
	I0731 19:30:04.202646  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.202671  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202685  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.202915  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.202953  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.202962  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.202971  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.202978  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.203056  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.203073  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.203081  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.203089  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.203394  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.203412  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.203464  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.203473  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.203481  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.203488  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.203538  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.203546  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.203555  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.203562  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.204974  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.204999  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.205007  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.205007  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.205015  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.205020  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.205662  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.205683  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.205704  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:04.338869  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:04.338897  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:04.339275  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:04.339324  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:04.339343  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:05.724660  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:05.938838  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.15938541s)
	I0731 19:30:05.938896  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:05.938909  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:05.939274  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:05.939294  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:05.939307  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:05.939317  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:05.939581  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:05.939603  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:05.939621  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:06.078749  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:06.078773  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:06.079146  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:06.079164  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:06.079181  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:06.341992  130103 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 19:30:06.342047  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:30:06.345518  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:30:06.345956  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:30:06.345991  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:30:06.346152  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:30:06.346381  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:30:06.346580  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:30:06.346740  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:30:06.681729  130103 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 19:30:06.930192  130103 addons.go:234] Setting addon gcp-auth=true in "addons-715925"
	I0731 19:30:06.930246  130103 host.go:66] Checking if "addons-715925" exists ...
	I0731 19:30:06.930576  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:30:06.930613  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:30:06.946803  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I0731 19:30:06.947300  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:30:06.947835  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:30:06.947862  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:30:06.948287  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:30:06.948759  130103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:30:06.948792  130103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:30:06.964506  130103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0731 19:30:06.965001  130103 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:30:06.965609  130103 main.go:141] libmachine: Using API Version  1
	I0731 19:30:06.965640  130103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:30:06.966003  130103 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:30:06.966214  130103 main.go:141] libmachine: (addons-715925) Calling .GetState
	I0731 19:30:06.967889  130103 main.go:141] libmachine: (addons-715925) Calling .DriverName
	I0731 19:30:06.968120  130103 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 19:30:06.968141  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHHostname
	I0731 19:30:06.971089  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:30:06.971530  130103 main.go:141] libmachine: (addons-715925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:64:ee", ip: ""} in network mk-addons-715925: {Iface:virbr1 ExpiryTime:2024-07-31 20:29:19 +0000 UTC Type:0 Mac:52:54:00:6d:64:ee Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:addons-715925 Clientid:01:52:54:00:6d:64:ee}
	I0731 19:30:06.971560  130103 main.go:141] libmachine: (addons-715925) DBG | domain addons-715925 has defined IP address 192.168.39.147 and MAC address 52:54:00:6d:64:ee in network mk-addons-715925
	I0731 19:30:06.971779  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHPort
	I0731 19:30:06.972016  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHKeyPath
	I0731 19:30:06.972197  130103 main.go:141] libmachine: (addons-715925) Calling .GetSSHUsername
	I0731 19:30:06.972354  130103 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/addons-715925/id_rsa Username:docker}
	I0731 19:30:07.309680  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.504408475s)
	I0731 19:30:07.309733  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.01032075s)
	I0731 19:30:07.309745  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.002919625s)
	I0731 19:30:07.309771  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309792  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.309771  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309855  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.87932118s)
	I0731 19:30:07.309886  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.685073069s)
	I0731 19:30:07.309862  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.309911  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309923  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.309888  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309953  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.309746  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.309978  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.310086  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310096  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310114  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.310121  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.310384  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310397  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310406  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.310413  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.310615  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310637  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310658  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310658  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310668  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310677  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.310681  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310685  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.310689  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310698  130103 addons.go:475] Verifying addon ingress=true in "addons-715925"
	I0731 19:30:07.310905  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310933  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310939  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.310940  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310951  130103 addons.go:475] Verifying addon metrics-server=true in "addons-715925"
	I0731 19:30:07.310966  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.310974  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.310981  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.310990  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.311048  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.311056  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.311063  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:07.311070  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:07.311335  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.311360  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.311368  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.312340  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:07.312390  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.312403  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.312393  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:07.312525  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:07.312536  130103 addons.go:475] Verifying addon registry=true in "addons-715925"
	I0731 19:30:07.313377  130103 out.go:177] * Verifying ingress addon...
	I0731 19:30:07.315323  130103 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-715925 service yakd-dashboard -n yakd-dashboard
	
	I0731 19:30:07.315330  130103 out.go:177] * Verifying registry addon...
	I0731 19:30:07.316056  130103 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 19:30:07.317597  130103 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 19:30:07.321048  130103 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 19:30:07.321071  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:07.329891  130103 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 19:30:07.329913  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:07.904199  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:07.907131  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:07.909078  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:08.180702  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.018291521s)
	W0731 19:30:08.180774  130103 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 19:30:08.180831  130103 retry.go:31] will retry after 254.101349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 19:30:08.322901  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:08.325188  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:08.435734  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 19:30:08.839063  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:08.849994  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:09.097097  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.399621776s)
	I0731 19:30:09.097147  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.924485604s)
	I0731 19:30:09.097163  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:09.097179  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:09.097180  130103 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.129040033s)
	I0731 19:30:09.097194  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:09.097211  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:09.097508  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:09.097520  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:09.097576  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:09.097587  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:09.097582  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:09.097619  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:09.097630  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:09.097596  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:09.097683  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:09.097691  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:09.097875  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:09.097952  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:09.097964  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:09.098005  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:09.098017  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:09.098026  130103 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-715925"
	I0731 19:30:09.098949  130103 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 19:30:09.099884  130103 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 19:30:09.101447  130103 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 19:30:09.102184  130103 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 19:30:09.102822  130103 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 19:30:09.102836  130103 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 19:30:09.119899  130103 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 19:30:09.119925  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:09.168738  130103 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 19:30:09.168765  130103 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 19:30:09.232292  130103 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 19:30:09.232318  130103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 19:30:09.320721  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:09.323672  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:09.333002  130103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 19:30:09.610514  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:09.831394  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:09.843995  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:10.109235  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:10.132791  130103 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:10.132815  130103 pod_ready.go:81] duration metric: took 8.511731828s for pod "coredns-7db6d8ff4d-fzb4m" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:10.132824  130103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:10.320822  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:10.324002  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:10.464239  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.028457595s)
	I0731 19:30:10.464294  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:10.464311  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:10.464659  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:10.464694  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:10.464707  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:10.464725  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:10.464734  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:10.464975  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:10.464991  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:10.653577  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:10.774948  130103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.441898325s)
	I0731 19:30:10.775019  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:10.775037  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:10.775335  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:10.775373  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:10.775378  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:10.775426  130103 main.go:141] libmachine: Making call to close driver server
	I0731 19:30:10.775441  130103 main.go:141] libmachine: (addons-715925) Calling .Close
	I0731 19:30:10.775707  130103 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:30:10.775728  130103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:30:10.775710  130103 main.go:141] libmachine: (addons-715925) DBG | Closing plugin on server side
	I0731 19:30:10.777007  130103 addons.go:475] Verifying addon gcp-auth=true in "addons-715925"
	I0731 19:30:10.778764  130103 out.go:177] * Verifying gcp-auth addon...
	I0731 19:30:10.780852  130103 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 19:30:10.794559  130103 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 19:30:10.794581  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:10.820773  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:10.844627  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:11.107541  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:11.285569  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:11.320927  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:11.323406  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:11.612247  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:11.784896  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:11.822434  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:11.824147  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:12.108582  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:12.139316  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:12.285203  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:12.322100  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:12.322509  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:12.608999  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:12.785444  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:12.820479  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:12.823103  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:13.108650  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:13.285097  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:13.321835  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:13.323511  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:13.609142  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:13.785225  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:13.821962  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:13.825212  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:14.110490  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:14.139607  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:14.285537  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:14.320573  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:14.324873  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:14.608134  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:14.784448  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:14.822307  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:14.823425  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:15.108344  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:15.288727  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:15.323142  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:15.327388  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:15.608711  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:15.938457  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:15.939006  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:15.942476  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:16.107575  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:16.140071  130103 pod_ready.go:102] pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:16.284420  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:16.333828  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:16.336572  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:16.608339  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:16.639384  130103 pod_ready.go:97] pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:30:16 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.147 HostIPs:[{IP:192.168.39.147}] PodIP: PodIPs:[] StartTime:2024-07-31 19:29:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-31 19:30:03 +0000 UTC,FinishedAt:2024-07-31 19:30:14 +0000 UTC,ContainerID:cri-o://f657bab2874e87ea97fccfa5dbe80ab18fdf1d8024fdbea331cdbecc5eecbaaa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f657bab2874e87ea97fccfa5dbe80ab18fdf1d8024fdbea331cdbecc5eecbaaa Started:0xc001eb6690 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0731 19:30:16.639414  130103 pod_ready.go:81] duration metric: took 6.506584187s for pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace to be "Ready" ...
	E0731 19:30:16.639426  130103 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-wm9kw" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:30:16 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-31 19:29:58 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.147 HostIPs:[{IP:192.168.39.147}] PodIP: PodIPs:[] StartTime:2024-07-31 19:29:58 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-31 19:30:03 +0000 UTC,FinishedAt:2024-07-31 19:30:14 +0000 UTC,ContainerID:cri-o://f657bab2874e87ea97fccfa5dbe80ab18fdf1d8024fdbea331cdbecc5eecbaaa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f657bab2874e87ea97fccfa5dbe80ab18fdf1d8024fdbea331cdbecc5eecbaaa Started:0xc001eb6690 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0731 19:30:16.639435  130103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.644465  130103 pod_ready.go:92] pod "etcd-addons-715925" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:16.644484  130103 pod_ready.go:81] duration metric: took 5.041381ms for pod "etcd-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.644492  130103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.656259  130103 pod_ready.go:92] pod "kube-apiserver-addons-715925" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:16.656294  130103 pod_ready.go:81] duration metric: took 11.791708ms for pod "kube-apiserver-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.656309  130103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.663955  130103 pod_ready.go:92] pod "kube-controller-manager-addons-715925" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:16.663978  130103 pod_ready.go:81] duration metric: took 7.66022ms for pod "kube-controller-manager-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.663991  130103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tfzvz" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.670766  130103 pod_ready.go:92] pod "kube-proxy-tfzvz" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:16.670797  130103 pod_ready.go:81] duration metric: took 6.797853ms for pod "kube-proxy-tfzvz" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.670809  130103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:16.784850  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:16.821364  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:16.825545  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:17.037796  130103 pod_ready.go:92] pod "kube-scheduler-addons-715925" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:17.037825  130103 pod_ready.go:81] duration metric: took 367.007684ms for pod "kube-scheduler-addons-715925" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:17.037835  130103 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:17.107386  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:17.284140  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:17.324480  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:17.326449  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:17.608348  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:17.785030  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:17.821031  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:17.822240  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:18.109050  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:18.286464  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:18.320140  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:18.324065  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:18.607607  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:18.784799  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:18.821885  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:18.822639  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:19.060626  130103 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:19.108046  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:19.285317  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:19.320629  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:19.322255  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:19.608430  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:19.786083  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:19.821142  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:19.823036  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:20.107792  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:20.284717  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:20.320295  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:20.321777  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:20.608954  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:20.785228  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:20.821972  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:20.829672  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:21.108719  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:21.287333  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:21.319838  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:21.323034  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:21.543633  130103 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:21.608221  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:21.784581  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:21.822616  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:21.823526  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:22.108156  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:22.285278  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:22.320972  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:22.322913  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:22.619640  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:22.783897  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:22.820596  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:22.822881  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:23.107464  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:23.284684  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:23.320486  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:23.322403  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:23.543677  130103 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:23.611258  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:23.785016  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:23.821277  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:23.824185  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:24.111987  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:24.284483  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:24.320595  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:24.323483  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:24.608319  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:24.785308  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:24.822549  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:24.823079  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:25.108362  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:25.285081  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:25.320945  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:25.321872  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:25.546465  130103 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:25.607131  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:25.785199  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:25.821306  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:25.822756  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:26.107765  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:26.634799  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:26.635500  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:26.635730  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:26.635853  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:26.784401  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:26.820815  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:26.823711  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:27.107799  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:27.285451  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:27.320811  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:27.322907  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:27.608647  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:27.785012  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:27.820858  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:27.822307  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:28.044774  130103 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:28.044795  130103 pod_ready.go:81] duration metric: took 11.006953747s for pod "nvidia-device-plugin-daemonset-2p88n" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:28.044803  130103 pod_ready.go:38] duration metric: took 26.430278638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:28.044820  130103 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:30:28.044872  130103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:28.064085  130103 api_server.go:72] duration metric: took 28.930117985s to wait for apiserver process to appear ...
	I0731 19:30:28.064118  130103 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:30:28.064161  130103 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0731 19:30:28.069566  130103 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0731 19:30:28.070493  130103 api_server.go:141] control plane version: v1.30.3
	I0731 19:30:28.070510  130103 api_server.go:131] duration metric: took 6.384944ms to wait for apiserver health ...
	I0731 19:30:28.070518  130103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:30:28.082601  130103 system_pods.go:59] 18 kube-system pods found
	I0731 19:30:28.082651  130103 system_pods.go:61] "coredns-7db6d8ff4d-fzb4m" [43b53489-b06e-4cb4-9515-be6b4e7f5588] Running
	I0731 19:30:28.082663  130103 system_pods.go:61] "csi-hostpath-attacher-0" [139d55af-90b3-45b0-92dc-f37933d17669] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 19:30:28.082673  130103 system_pods.go:61] "csi-hostpath-resizer-0" [f4f165ba-2937-41b7-9dac-e9a67ff22feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 19:30:28.082684  130103 system_pods.go:61] "csi-hostpathplugin-4j5wp" [cd5f8368-bef5-476f-ab47-b7c63c2ec4f7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 19:30:28.082691  130103 system_pods.go:61] "etcd-addons-715925" [c0548359-7576-4e62-9bfb-3402be548366] Running
	I0731 19:30:28.082697  130103 system_pods.go:61] "kube-apiserver-addons-715925" [4d5bab31-ad1d-4a7b-bab2-e3b6ada76520] Running
	I0731 19:30:28.082702  130103 system_pods.go:61] "kube-controller-manager-addons-715925" [c6016a60-c185-493a-9390-d012bf650d44] Running
	I0731 19:30:28.082709  130103 system_pods.go:61] "kube-ingress-dns-minikube" [bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397] Running
	I0731 19:30:28.082713  130103 system_pods.go:61] "kube-proxy-tfzvz" [6f30c198-5a23-42cb-8a8a-3e81ac3dce14] Running
	I0731 19:30:28.082718  130103 system_pods.go:61] "kube-scheduler-addons-715925" [7a801eb1-d479-4df9-ad7e-be2807f32007] Running
	I0731 19:30:28.082726  130103 system_pods.go:61] "metrics-server-c59844bb4-s4tts" [16f96003-84b9-4f23-a5c6-b1f5047bf0f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 19:30:28.082735  130103 system_pods.go:61] "nvidia-device-plugin-daemonset-2p88n" [8b668c12-5647-4aa6-b190-d9e2e127ea94] Running
	I0731 19:30:28.082743  130103 system_pods.go:61] "registry-698f998955-x87x7" [2a48b934-362f-4a2d-b591-308e178c9f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 19:30:28.082752  130103 system_pods.go:61] "registry-proxy-2j7k4" [2550e10a-7f6c-463d-a4b7-da2406bd5137] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 19:30:28.082765  130103 system_pods.go:61] "snapshot-controller-745499f584-9n7kz" [28c39ab0-f8ef-4a21-900f-a53ede22dced] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 19:30:28.082780  130103 system_pods.go:61] "snapshot-controller-745499f584-nlmlq" [82bcba7d-98ac-4401-8b9a-aa6a93bdc494] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 19:30:28.082786  130103 system_pods.go:61] "storage-provisioner" [126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3] Running
	I0731 19:30:28.082795  130103 system_pods.go:61] "tiller-deploy-6677d64bcd-9f7w2" [451aed79-261a-45ab-aa7c-e595c0dd9688] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 19:30:28.082807  130103 system_pods.go:74] duration metric: took 12.282368ms to wait for pod list to return data ...
	I0731 19:30:28.082818  130103 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:30:28.094245  130103 default_sa.go:45] found service account: "default"
	I0731 19:30:28.094269  130103 default_sa.go:55] duration metric: took 11.441558ms for default service account to be created ...
	I0731 19:30:28.094278  130103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:30:28.106776  130103 system_pods.go:86] 18 kube-system pods found
	I0731 19:30:28.106803  130103 system_pods.go:89] "coredns-7db6d8ff4d-fzb4m" [43b53489-b06e-4cb4-9515-be6b4e7f5588] Running
	I0731 19:30:28.106810  130103 system_pods.go:89] "csi-hostpath-attacher-0" [139d55af-90b3-45b0-92dc-f37933d17669] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0731 19:30:28.106816  130103 system_pods.go:89] "csi-hostpath-resizer-0" [f4f165ba-2937-41b7-9dac-e9a67ff22feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0731 19:30:28.106823  130103 system_pods.go:89] "csi-hostpathplugin-4j5wp" [cd5f8368-bef5-476f-ab47-b7c63c2ec4f7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 19:30:28.106830  130103 system_pods.go:89] "etcd-addons-715925" [c0548359-7576-4e62-9bfb-3402be548366] Running
	I0731 19:30:28.106837  130103 system_pods.go:89] "kube-apiserver-addons-715925" [4d5bab31-ad1d-4a7b-bab2-e3b6ada76520] Running
	I0731 19:30:28.106843  130103 system_pods.go:89] "kube-controller-manager-addons-715925" [c6016a60-c185-493a-9390-d012bf650d44] Running
	I0731 19:30:28.106849  130103 system_pods.go:89] "kube-ingress-dns-minikube" [bbc90c8c-9f3d-43fa-bd6d-1bbfc26c8397] Running
	I0731 19:30:28.106858  130103 system_pods.go:89] "kube-proxy-tfzvz" [6f30c198-5a23-42cb-8a8a-3e81ac3dce14] Running
	I0731 19:30:28.106864  130103 system_pods.go:89] "kube-scheduler-addons-715925" [7a801eb1-d479-4df9-ad7e-be2807f32007] Running
	I0731 19:30:28.106870  130103 system_pods.go:89] "metrics-server-c59844bb4-s4tts" [16f96003-84b9-4f23-a5c6-b1f5047bf0f7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 19:30:28.106878  130103 system_pods.go:89] "nvidia-device-plugin-daemonset-2p88n" [8b668c12-5647-4aa6-b190-d9e2e127ea94] Running
	I0731 19:30:28.106885  130103 system_pods.go:89] "registry-698f998955-x87x7" [2a48b934-362f-4a2d-b591-308e178c9f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0731 19:30:28.106897  130103 system_pods.go:89] "registry-proxy-2j7k4" [2550e10a-7f6c-463d-a4b7-da2406bd5137] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0731 19:30:28.106911  130103 system_pods.go:89] "snapshot-controller-745499f584-9n7kz" [28c39ab0-f8ef-4a21-900f-a53ede22dced] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 19:30:28.106922  130103 system_pods.go:89] "snapshot-controller-745499f584-nlmlq" [82bcba7d-98ac-4401-8b9a-aa6a93bdc494] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0731 19:30:28.106928  130103 system_pods.go:89] "storage-provisioner" [126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3] Running
	I0731 19:30:28.106935  130103 system_pods.go:89] "tiller-deploy-6677d64bcd-9f7w2" [451aed79-261a-45ab-aa7c-e595c0dd9688] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0731 19:30:28.106942  130103 system_pods.go:126] duration metric: took 12.658446ms to wait for k8s-apps to be running ...
	I0731 19:30:28.106951  130103 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:30:28.106996  130103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:30:28.111242  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:28.123392  130103 system_svc.go:56] duration metric: took 16.431246ms WaitForService to wait for kubelet
	I0731 19:30:28.123422  130103 kubeadm.go:582] duration metric: took 28.98946126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:30:28.123457  130103 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:30:28.128170  130103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:30:28.128204  130103 node_conditions.go:123] node cpu capacity is 2
	I0731 19:30:28.128219  130103 node_conditions.go:105] duration metric: took 4.75563ms to run NodePressure ...
	I0731 19:30:28.128234  130103 start.go:241] waiting for startup goroutines ...
	I0731 19:30:28.284867  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:28.321764  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:28.322152  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:28.608183  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:28.784686  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:28.820893  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:28.824617  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:29.106969  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:29.284750  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:29.320508  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:29.323565  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:29.608525  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:29.787318  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:29.822141  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:29.823751  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:30.107709  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:30.285108  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:30.321085  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:30.322192  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:30.608096  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:30.784597  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:30.820636  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:30.823929  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:31.108670  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:31.286877  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:31.322182  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:31.322670  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:31.607800  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:31.784678  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:31.820261  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:31.823417  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:32.108655  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:32.284858  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:32.321229  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:32.322407  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:32.608100  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:32.785866  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:32.820641  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:32.822049  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:33.546515  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:33.546591  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:33.550853  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:33.552588  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:33.608327  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:33.784891  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:33.822382  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:33.822949  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:34.108104  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:34.284665  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:34.320175  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:34.322593  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:34.610157  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:34.785070  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:34.834576  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:34.836095  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:35.108884  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:35.284584  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:35.320915  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:35.322928  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:35.608081  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:35.784287  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:35.821501  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:35.823517  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:36.109075  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:36.284829  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:36.320874  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:36.323358  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:36.607804  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:36.784376  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:36.820424  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:36.826230  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:37.108392  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:37.284956  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:37.320777  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:37.323424  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:37.607162  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:37.784421  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:37.819903  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:37.823411  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:38.111872  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:38.284897  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:38.321232  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:38.323522  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:38.609135  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:38.785516  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:38.821427  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:38.823187  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:39.108032  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:39.287315  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:39.321686  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:39.324049  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:39.608542  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:39.784899  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:39.820560  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:39.823068  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:40.107570  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:40.284786  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:40.320480  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:40.324633  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:40.614528  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:40.785169  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:40.821036  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:40.823254  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:41.109091  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:41.285068  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:41.322654  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:41.324005  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:41.608771  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:41.784417  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:41.820106  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:41.822337  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:42.108970  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:42.284505  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:42.320153  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:42.322256  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:42.610721  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:42.785207  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:42.821245  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:42.822795  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:43.107825  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:43.284694  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:43.321730  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:43.322427  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:43.613134  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:43.785104  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:43.824767  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 19:30:43.824803  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:44.110325  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:44.284715  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:44.320671  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:44.322673  130103 kapi.go:107] duration metric: took 37.005073658s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 19:30:44.607680  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:44.784513  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:44.820404  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:45.119350  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:45.284909  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:45.321225  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:45.618067  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:45.784886  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:45.820681  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:46.109401  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:46.285776  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:46.363403  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:46.608900  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:46.785157  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:46.821720  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:47.108063  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:47.285019  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:47.320488  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:47.608933  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:47.785019  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:47.821203  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:48.108697  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:48.284785  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:48.320772  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:48.607788  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:48.785366  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:48.821268  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:49.108906  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:49.284111  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:49.320815  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:49.608474  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:49.784911  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:49.820668  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:50.107721  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:50.284715  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:50.320908  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:50.609905  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:50.784609  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:50.821912  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:51.107817  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:51.285157  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:51.321509  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:51.611084  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:52.013155  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:52.013952  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:52.109613  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:52.286015  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:52.322008  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:52.608132  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:52.784878  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:52.820495  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:53.114581  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:53.285484  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:53.321414  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:53.612479  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:53.785067  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:53.821766  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:54.108249  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:54.285087  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:54.320567  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:54.609647  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:54.784128  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:54.821013  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:55.107933  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:55.284469  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:55.320401  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:55.608365  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:55.784564  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:55.821376  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:56.108353  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:56.285192  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:56.321502  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:56.614584  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:56.793996  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:56.821850  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:57.138550  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:57.285886  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:57.321772  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:57.608126  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:57.784387  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:57.820041  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:58.111851  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:58.284357  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:58.321859  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:58.608776  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:58.785430  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:58.821268  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:59.107771  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:59.285822  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:59.321160  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:30:59.609107  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:30:59.784072  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:30:59.820612  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:00.109055  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:00.284901  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:00.320790  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:00.607949  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:00.784948  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:00.820802  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:01.108071  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:01.285098  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:01.321272  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:01.608388  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:01.785563  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:01.820188  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:02.110479  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:02.285134  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:02.321491  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:02.608651  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:02.784127  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:02.820914  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:03.107038  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:03.284773  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:03.320564  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:03.608628  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:04.157208  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:04.159944  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:04.162399  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:04.283937  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:04.320684  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:04.609148  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:04.784274  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:04.821221  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:05.107435  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:05.285083  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:05.320849  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:05.610064  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:05.785415  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:05.820464  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:06.108827  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:06.284511  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:06.319993  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:06.608086  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:06.785445  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:06.820441  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:07.108395  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:07.284902  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:07.321166  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:07.609429  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:07.785787  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:07.821505  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:08.107937  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:08.284722  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:08.320315  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:08.610636  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:08.784636  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:08.820936  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:09.107912  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:09.284234  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:09.320780  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:09.608358  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:09.784764  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:09.820580  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:10.108876  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:10.284708  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:10.321283  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:10.610522  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:10.784456  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:10.820684  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:11.107598  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:11.284774  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:11.324569  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:11.608315  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:11.784675  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:11.820071  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:12.107752  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:12.284758  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:12.320564  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:12.608544  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:12.785846  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:12.820802  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:13.107465  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:13.285182  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:13.322097  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:13.607876  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:13.784927  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:13.821643  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:14.108956  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:14.595832  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:14.598740  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:14.618431  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:14.785308  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:14.821234  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:15.108737  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:15.286112  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:15.324326  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:15.613274  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:15.784301  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:15.822120  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:16.109744  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:16.284976  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:16.320676  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:16.607781  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:16.785064  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:16.821036  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:17.107801  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:17.285151  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:17.324635  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:17.609216  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:17.784471  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:17.823014  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:18.107759  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:18.291268  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:18.324422  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:18.612428  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:18.785235  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:18.821523  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:19.109797  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:19.284162  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:19.334293  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:19.623490  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:19.786273  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:19.822155  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:20.107959  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:20.286896  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:20.321530  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:20.611566  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:20.783913  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:20.820822  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:21.108926  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:21.284224  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:21.321091  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:21.608172  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:21.785167  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:21.821414  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:22.108064  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:22.286795  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:22.320902  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:22.612951  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:22.785268  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:22.821009  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:23.112040  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:23.285554  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:23.320414  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:23.608217  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:23.784581  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:23.820660  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:24.107902  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:24.284946  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:24.320403  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:24.608054  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:24.785183  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:24.823197  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:25.269125  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:25.285597  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:25.321204  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:25.608196  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:25.784910  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:25.820647  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:26.107406  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:26.284723  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:26.320992  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:26.607632  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:26.786002  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:26.821179  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:27.107430  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:27.285111  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:27.321715  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:27.616803  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 19:31:27.784674  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:27.820412  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:28.109490  130103 kapi.go:107] duration metric: took 1m19.007303321s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 19:31:28.285067  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:28.320989  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:28.785997  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:28.821755  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:29.284682  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:29.321013  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:29.785056  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:29.821387  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:30.285078  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:30.320956  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:30.785368  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:30.820301  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:31.285121  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:31.321173  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:31.784206  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:31.821360  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:32.284904  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:32.320822  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:32.785642  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:32.820836  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:33.284383  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:33.321501  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:33.784707  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:33.820709  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:34.284977  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:34.321267  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:34.784048  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:34.821823  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:35.287149  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:35.324383  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:35.784451  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:35.821650  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:36.285313  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:36.325279  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:36.784057  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:36.821063  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:37.285326  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:37.321795  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:37.784169  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:37.820906  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:38.285137  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:38.321253  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:38.784221  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:38.821362  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:39.284604  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:39.321675  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:39.784755  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:39.820651  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:40.284872  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:40.320489  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:40.784446  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:40.820932  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:41.285066  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:41.322357  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:41.786164  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:41.821649  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:42.284496  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:42.320461  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:42.784535  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:42.820105  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:43.286457  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:43.321350  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:43.784987  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:43.821223  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:44.285154  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:44.320858  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:44.785369  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:44.820742  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:45.285326  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:45.323659  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:45.784504  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:45.820692  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:46.285644  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:46.322309  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:46.784823  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:46.820583  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:47.285382  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:47.322015  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:47.786911  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:47.820986  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:48.284940  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:48.321285  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:48.784789  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:48.821018  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:49.284719  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:49.321466  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:49.785464  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:49.821286  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:50.284942  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:50.321044  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:50.785667  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:50.820660  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:51.284989  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:51.321242  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:51.788417  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:51.820925  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:52.285041  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:52.321207  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:52.784226  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:52.821155  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:53.285102  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:53.321977  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:53.784472  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:53.820693  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:54.284635  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:54.320881  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:54.784539  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:54.820274  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:55.284591  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:55.320495  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:55.784950  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:55.821193  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:56.284074  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:56.320946  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:56.784968  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:56.821112  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:57.284255  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:57.321609  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:57.784898  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:57.825301  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:58.284851  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:58.321233  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:58.784430  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:58.820265  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:59.285251  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:59.321487  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:31:59.785405  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:31:59.820461  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:00.284852  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:00.321013  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:00.785272  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:00.822654  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:01.284432  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:01.321722  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:01.784721  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:01.820851  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:02.284654  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:02.320638  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:02.784555  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:02.820711  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:03.284812  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:03.321597  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:03.784674  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:03.820858  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:04.285029  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:04.320935  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:04.785934  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:04.821558  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:05.288813  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:05.325296  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:05.785254  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:05.821289  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:06.285214  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:06.320802  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:06.785048  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:06.820866  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:07.285046  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:07.322128  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:07.786575  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:07.820623  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:08.284745  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:08.320768  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:08.787741  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:08.821222  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:09.291709  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:09.321236  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:09.785566  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:09.820950  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:10.285134  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:10.321305  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:10.784338  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:10.821488  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:11.284321  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:11.322052  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:11.785321  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:11.821886  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:12.284417  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:12.320943  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:12.785162  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:12.821228  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:13.285175  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:13.321527  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:13.785190  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:13.821859  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:14.284062  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:14.322435  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:14.784410  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:14.820665  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:15.284593  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:15.320904  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:15.785564  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:15.820755  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:16.284857  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:16.320942  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:16.785362  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:16.820312  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:17.284516  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:17.322101  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:17.785047  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:17.821050  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:18.285396  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:18.320814  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:18.784435  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:18.820978  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:19.284468  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:19.321475  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:19.784460  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:19.820320  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:20.284834  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:20.321110  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:20.784075  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:20.823117  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:21.285547  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:21.322063  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:21.786145  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:21.821328  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:22.284115  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:22.321166  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:22.784896  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:22.820814  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:23.285158  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:23.321420  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:23.784975  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:23.821241  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:24.284917  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:24.321634  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:24.784597  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:24.820382  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:25.284323  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:25.321739  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:25.784827  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:25.820882  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:26.285175  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:26.321401  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:26.784247  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:26.820862  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:27.285096  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:27.322787  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:27.785153  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:27.821346  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:28.285353  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:28.322187  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:29.065777  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:29.067604  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:29.285918  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:29.321555  130103 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 19:32:29.784295  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:29.822643  130103 kapi.go:107] duration metric: took 2m22.506580424s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 19:32:30.285376  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:30.785187  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:31.285535  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:31.785442  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:32.285037  130103 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 19:32:32.785236  130103 kapi.go:107] duration metric: took 2m22.004379575s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 19:32:32.787096  130103 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-715925 cluster.
	I0731 19:32:32.788383  130103 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 19:32:32.789514  130103 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 19:32:32.790704  130103 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, helm-tiller, yakd, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0731 19:32:32.791984  130103 addons.go:510] duration metric: took 2m33.657998011s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher metrics-server helm-tiller yakd inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0731 19:32:32.792032  130103 start.go:246] waiting for cluster config update ...
	I0731 19:32:32.792056  130103 start.go:255] writing updated cluster config ...
	I0731 19:32:32.792335  130103 ssh_runner.go:195] Run: rm -f paused
	I0731 19:32:32.847217  130103 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 19:32:32.849149  130103 out.go:177] * Done! kubectl is now configured to use "addons-715925" cluster and "default" namespace by default
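
For reference, the gcp-auth messages above describe the `gcp-auth-skip-secret` opt-out label. A minimal sketch of creating a pod that the gcp-auth webhook leaves alone, assuming the addons-715925 context from this run (the pod name and image are placeholders chosen for illustration, not part of the test):

    # The mutating webhook checks this label when the pod is created,
    # so it has to be present in the pod spec at creation time.
    kubectl --context addons-715925 run skip-gcp-auth-demo \
      --image=busybox:stable \
      --labels="gcp-auth-skip-secret=true" \
      -- sleep 3600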
	
	
	==> CRI-O <==
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.258613873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454752258590384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45064edf-1e6a-467d-8e45-4bf4e5bd0a2e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.259202834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d807afb-c47b-471f-ac70-5ab2056a5b77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.259288832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d807afb-c47b-471f-ac70-5ab2056a5b77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.259557417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ed79966f6af9ecd4f1d92cff93ca01f67e231df8339417ed4a70a5bd37dc77a,PodSandboxId:4f059fa81b036762f55e37fc60476ed262fac5d8033b7bfdc7823c45ee08088e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722454574884511387,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-hw5cv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159,},Annotations:map[string]string{io.kubernetes.container.hash: e455fdcd,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eb5fac72ab9789e1ae64e1914c012999675d86c75bb25fc04024108f72f2af,PodSandboxId:90af3eb44c3b9b87f9e5cef21c5551cf0a9786b89ac1139ce572eba28d734387,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722454434696019220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8401d1a8-6dd2-40c9-8e23-deb823f5b208,},Annotations:map[string]string{io.kubernet
es.container.hash: 62d342f3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c174df487e549a44e4bcf555e30263a99c3c51908705a0c5f10e072b5549c6d8,PodSandboxId:ac6cc8b053bdb3bdb6b1af470a8f609ad7b6a80bae9836c268ad21042104db44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722454357926521523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a390ff63-8c7c-40de-a
874-20112644ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f8bde95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acc20d13d3bc1b75ebc726fafe9ff4ae146ce6bd01305036d3078a076c9e48d,PodSandboxId:ad5d705d2b17b756b1e64c67ff1ce241c932d5b1beba35de0e7359652e38ef4a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722454256728982665,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-s4tts,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 16f96003-84b9-4f23-a5c6-b1f5047bf0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 501ef6d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542,PodSandboxId:b12d2a576bca9bb9bede1e19922be7fd3e2a99bfafbe9c8141823699a227e26f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722454207862452115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3,},Annotations:map[string]string{io.kubernetes.container.hash: 131ec37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c,PodSandboxId:c24ffde6a55f26d9d7699b205ed25be9b7beeae5ba21d8479993cd545de0743d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454203005072752,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-fzb4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43b53489-b06e-4cb4-9515-be6b4e7f5588,},Annotations:map[string]string{io.kubernetes.container.hash: 6f89aeb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a,PodSandboxId:e7a00f7c882a2a0c962b046e4dc63d3696cd64086ed16900736a407ffbac2c40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454201493789328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfzvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f30c198-5a23-42cb-8a8a-3e81ac3dce14,},Annotations:map[string]string{io.kubernetes.container.hash: b6458cb1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8,PodSandboxId:9867fdd58f216124e97b07e78d2cf248e529890ddd0b0fbdeeb09128aba4d04f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5
ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454179684668090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8dd9fb67173c0838ca349b97994d63,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814,PodSandboxId:03c861eb8f9baa0b351139e9b61cc6c3bc50ecaa86cbafdf6e69cf27d10cbea7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_RUNNING,CreatedAt:1722454179690805575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcf05e09ab846407ce6f5cc016c5936,},Annotations:map[string]string{io.kubernetes.container.hash: e1633df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8,PodSandboxId:7a615a0816098b4b57ae43b5a6c84653e02217b4a93a8a50b26eb461c3da170f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454
179635198579,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91dc58de568c063e3805468402f4b65e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5,PodSandboxId:4c9318500bde794a83060cb785866f7c0f0a8ab1b3cdc22ce7a8777fba61cf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172245
4179588690580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d24d4034029e15cb6159863f99c4af6,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d807afb-c47b-471f-ac70-5ab2056a5b77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.299127607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=394592ec-808d-47b4-9d0b-2baa6720d952 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.299246342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=394592ec-808d-47b4-9d0b-2baa6720d952 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.300432966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7031a20b-cc12-49ee-8a27-f15efde1bbaf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.302135187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454752302100570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7031a20b-cc12-49ee-8a27-f15efde1bbaf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.307166012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9509fde-0bfd-4fa2-8c30-616cdebafc86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.307359694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9509fde-0bfd-4fa2-8c30-616cdebafc86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.308000964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ed79966f6af9ecd4f1d92cff93ca01f67e231df8339417ed4a70a5bd37dc77a,PodSandboxId:4f059fa81b036762f55e37fc60476ed262fac5d8033b7bfdc7823c45ee08088e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722454574884511387,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-hw5cv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159,},Annotations:map[string]string{io.kubernetes.container.hash: e455fdcd,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eb5fac72ab9789e1ae64e1914c012999675d86c75bb25fc04024108f72f2af,PodSandboxId:90af3eb44c3b9b87f9e5cef21c5551cf0a9786b89ac1139ce572eba28d734387,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722454434696019220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8401d1a8-6dd2-40c9-8e23-deb823f5b208,},Annotations:map[string]string{io.kubernet
es.container.hash: 62d342f3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c174df487e549a44e4bcf555e30263a99c3c51908705a0c5f10e072b5549c6d8,PodSandboxId:ac6cc8b053bdb3bdb6b1af470a8f609ad7b6a80bae9836c268ad21042104db44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722454357926521523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a390ff63-8c7c-40de-a
874-20112644ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f8bde95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acc20d13d3bc1b75ebc726fafe9ff4ae146ce6bd01305036d3078a076c9e48d,PodSandboxId:ad5d705d2b17b756b1e64c67ff1ce241c932d5b1beba35de0e7359652e38ef4a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722454256728982665,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-s4tts,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 16f96003-84b9-4f23-a5c6-b1f5047bf0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 501ef6d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542,PodSandboxId:b12d2a576bca9bb9bede1e19922be7fd3e2a99bfafbe9c8141823699a227e26f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722454207862452115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3,},Annotations:map[string]string{io.kubernetes.container.hash: 131ec37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c,PodSandboxId:c24ffde6a55f26d9d7699b205ed25be9b7beeae5ba21d8479993cd545de0743d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454203005072752,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-fzb4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43b53489-b06e-4cb4-9515-be6b4e7f5588,},Annotations:map[string]string{io.kubernetes.container.hash: 6f89aeb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a,PodSandboxId:e7a00f7c882a2a0c962b046e4dc63d3696cd64086ed16900736a407ffbac2c40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454201493789328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfzvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f30c198-5a23-42cb-8a8a-3e81ac3dce14,},Annotations:map[string]string{io.kubernetes.container.hash: b6458cb1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8,PodSandboxId:9867fdd58f216124e97b07e78d2cf248e529890ddd0b0fbdeeb09128aba4d04f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5
ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454179684668090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8dd9fb67173c0838ca349b97994d63,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814,PodSandboxId:03c861eb8f9baa0b351139e9b61cc6c3bc50ecaa86cbafdf6e69cf27d10cbea7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_RUNNING,CreatedAt:1722454179690805575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcf05e09ab846407ce6f5cc016c5936,},Annotations:map[string]string{io.kubernetes.container.hash: e1633df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8,PodSandboxId:7a615a0816098b4b57ae43b5a6c84653e02217b4a93a8a50b26eb461c3da170f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454
179635198579,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91dc58de568c063e3805468402f4b65e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5,PodSandboxId:4c9318500bde794a83060cb785866f7c0f0a8ab1b3cdc22ce7a8777fba61cf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172245
4179588690580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d24d4034029e15cb6159863f99c4af6,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9509fde-0bfd-4fa2-8c30-616cdebafc86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.345821780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=361a66f5-80b2-446b-bc98-02213217a656 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.345952128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=361a66f5-80b2-446b-bc98-02213217a656 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.347477898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5456c609-4134-48f1-b7c7-9643ce0fdd41 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.348967085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454752348941366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5456c609-4134-48f1-b7c7-9643ce0fdd41 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.349582674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=003ae51f-2004-48de-aed2-786fd929009b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.349635275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=003ae51f-2004-48de-aed2-786fd929009b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.349863982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ed79966f6af9ecd4f1d92cff93ca01f67e231df8339417ed4a70a5bd37dc77a,PodSandboxId:4f059fa81b036762f55e37fc60476ed262fac5d8033b7bfdc7823c45ee08088e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722454574884511387,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-hw5cv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159,},Annotations:map[string]string{io.kubernetes.container.hash: e455fdcd,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eb5fac72ab9789e1ae64e1914c012999675d86c75bb25fc04024108f72f2af,PodSandboxId:90af3eb44c3b9b87f9e5cef21c5551cf0a9786b89ac1139ce572eba28d734387,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722454434696019220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8401d1a8-6dd2-40c9-8e23-deb823f5b208,},Annotations:map[string]string{io.kubernet
es.container.hash: 62d342f3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c174df487e549a44e4bcf555e30263a99c3c51908705a0c5f10e072b5549c6d8,PodSandboxId:ac6cc8b053bdb3bdb6b1af470a8f609ad7b6a80bae9836c268ad21042104db44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722454357926521523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a390ff63-8c7c-40de-a
874-20112644ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f8bde95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acc20d13d3bc1b75ebc726fafe9ff4ae146ce6bd01305036d3078a076c9e48d,PodSandboxId:ad5d705d2b17b756b1e64c67ff1ce241c932d5b1beba35de0e7359652e38ef4a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722454256728982665,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-s4tts,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 16f96003-84b9-4f23-a5c6-b1f5047bf0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 501ef6d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542,PodSandboxId:b12d2a576bca9bb9bede1e19922be7fd3e2a99bfafbe9c8141823699a227e26f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722454207862452115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3,},Annotations:map[string]string{io.kubernetes.container.hash: 131ec37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c,PodSandboxId:c24ffde6a55f26d9d7699b205ed25be9b7beeae5ba21d8479993cd545de0743d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454203005072752,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-fzb4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43b53489-b06e-4cb4-9515-be6b4e7f5588,},Annotations:map[string]string{io.kubernetes.container.hash: 6f89aeb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a,PodSandboxId:e7a00f7c882a2a0c962b046e4dc63d3696cd64086ed16900736a407ffbac2c40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454201493789328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfzvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f30c198-5a23-42cb-8a8a-3e81ac3dce14,},Annotations:map[string]string{io.kubernetes.container.hash: b6458cb1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8,PodSandboxId:9867fdd58f216124e97b07e78d2cf248e529890ddd0b0fbdeeb09128aba4d04f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5
ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454179684668090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8dd9fb67173c0838ca349b97994d63,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814,PodSandboxId:03c861eb8f9baa0b351139e9b61cc6c3bc50ecaa86cbafdf6e69cf27d10cbea7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_RUNNING,CreatedAt:1722454179690805575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcf05e09ab846407ce6f5cc016c5936,},Annotations:map[string]string{io.kubernetes.container.hash: e1633df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8,PodSandboxId:7a615a0816098b4b57ae43b5a6c84653e02217b4a93a8a50b26eb461c3da170f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454
179635198579,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91dc58de568c063e3805468402f4b65e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5,PodSandboxId:4c9318500bde794a83060cb785866f7c0f0a8ab1b3cdc22ce7a8777fba61cf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172245
4179588690580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d24d4034029e15cb6159863f99c4af6,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=003ae51f-2004-48de-aed2-786fd929009b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.385490003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ff5d4b0-abec-4bc3-a273-8e8647de27e0 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.385564385Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ff5d4b0-abec-4bc3-a273-8e8647de27e0 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.386845578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83e20580-c088-424d-84b5-ba1923e347f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.388106021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454752388080721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83e20580-c088-424d-84b5-ba1923e347f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.388640659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01e12b6c-054c-460b-9242-97f3eb55148b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.388748775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01e12b6c-054c-460b-9242-97f3eb55148b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:39:12 addons-715925 crio[681]: time="2024-07-31 19:39:12.389090663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ed79966f6af9ecd4f1d92cff93ca01f67e231df8339417ed4a70a5bd37dc77a,PodSandboxId:4f059fa81b036762f55e37fc60476ed262fac5d8033b7bfdc7823c45ee08088e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722454574884511387,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-hw5cv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17f5ea4d-0f1d-4192-b5c5-8b98fc8ea159,},Annotations:map[string]string{io.kubernetes.container.hash: e455fdcd,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eb5fac72ab9789e1ae64e1914c012999675d86c75bb25fc04024108f72f2af,PodSandboxId:90af3eb44c3b9b87f9e5cef21c5551cf0a9786b89ac1139ce572eba28d734387,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722454434696019220,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8401d1a8-6dd2-40c9-8e23-deb823f5b208,},Annotations:map[string]string{io.kubernet
es.container.hash: 62d342f3,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c174df487e549a44e4bcf555e30263a99c3c51908705a0c5f10e072b5549c6d8,PodSandboxId:ac6cc8b053bdb3bdb6b1af470a8f609ad7b6a80bae9836c268ad21042104db44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722454357926521523,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a390ff63-8c7c-40de-a
874-20112644ffd4,},Annotations:map[string]string{io.kubernetes.container.hash: 2f8bde95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0acc20d13d3bc1b75ebc726fafe9ff4ae146ce6bd01305036d3078a076c9e48d,PodSandboxId:ad5d705d2b17b756b1e64c67ff1ce241c932d5b1beba35de0e7359652e38ef4a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722454256728982665,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-s4tts,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 16f96003-84b9-4f23-a5c6-b1f5047bf0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 501ef6d5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542,PodSandboxId:b12d2a576bca9bb9bede1e19922be7fd3e2a99bfafbe9c8141823699a227e26f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722454207862452115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 126127c5-8cd2-4f4e-8f76-e3bc2eb6eca3,},Annotations:map[string]string{io.kubernetes.container.hash: 131ec37,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c,PodSandboxId:c24ffde6a55f26d9d7699b205ed25be9b7beeae5ba21d8479993cd545de0743d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454203005072752,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d
8ff4d-fzb4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43b53489-b06e-4cb4-9515-be6b4e7f5588,},Annotations:map[string]string{io.kubernetes.container.hash: 6f89aeb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a,PodSandboxId:e7a00f7c882a2a0c962b046e4dc63d3696cd64086ed16900736a407ffbac2c40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454201493789328,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfzvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f30c198-5a23-42cb-8a8a-3e81ac3dce14,},Annotations:map[string]string{io.kubernetes.container.hash: b6458cb1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8,PodSandboxId:9867fdd58f216124e97b07e78d2cf248e529890ddd0b0fbdeeb09128aba4d04f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5
ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454179684668090,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d8dd9fb67173c0838ca349b97994d63,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814,PodSandboxId:03c861eb8f9baa0b351139e9b61cc6c3bc50ecaa86cbafdf6e69cf27d10cbea7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_RUNNING,CreatedAt:1722454179690805575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcf05e09ab846407ce6f5cc016c5936,},Annotations:map[string]string{io.kubernetes.container.hash: e1633df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8,PodSandboxId:7a615a0816098b4b57ae43b5a6c84653e02217b4a93a8a50b26eb461c3da170f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454
179635198579,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91dc58de568c063e3805468402f4b65e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5,PodSandboxId:4c9318500bde794a83060cb785866f7c0f0a8ab1b3cdc22ce7a8777fba61cf6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172245
4179588690580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-715925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d24d4034029e15cb6159863f99c4af6,},Annotations:map[string]string{io.kubernetes.container.hash: 7ae5f20f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01e12b6c-054c-460b-9242-97f3eb55148b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ed79966f6af9       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   4f059fa81b036       hello-world-app-6778b5fc9f-hw5cv
	a4eb5fac72ab9       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   90af3eb44c3b9       nginx
	c174df487e549       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   ac6cc8b053bdb       busybox
	0acc20d13d3bc       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   8 minutes ago       Running             metrics-server            0                   ad5d705d2b17b       metrics-server-c59844bb4-s4tts
	09c88c0d9b413       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        9 minutes ago       Running             storage-provisioner       0                   b12d2a576bca9       storage-provisioner
	3b5fe3d46d671       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        9 minutes ago       Running             coredns                   0                   c24ffde6a55f2       coredns-7db6d8ff4d-fzb4m
	13d77da1e019e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        9 minutes ago       Running             kube-proxy                0                   e7a00f7c882a2       kube-proxy-tfzvz
	c81b526e401e8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        9 minutes ago       Running             etcd                      0                   03c861eb8f9ba       etcd-addons-715925
	779ef57c86f86       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        9 minutes ago       Running             kube-scheduler            0                   9867fdd58f216       kube-scheduler-addons-715925
	ecf8437cb53cc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        9 minutes ago       Running             kube-controller-manager   0                   7a615a0816098       kube-controller-manager-addons-715925
	b7f5e7ab4069c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        9 minutes ago       Running             kube-apiserver            0                   4c9318500bde7       kube-apiserver-addons-715925
	
	
	==> coredns [3b5fe3d46d67178803535398ae11462cb0429aef871f008ebfbd08681ea4028c] <==
	[INFO] 10.244.0.7:50331 - 9430 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163087s
	[INFO] 10.244.0.7:40064 - 62706 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006072s
	[INFO] 10.244.0.7:40064 - 59340 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085788s
	[INFO] 10.244.0.7:36130 - 43402 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003539s
	[INFO] 10.244.0.7:36130 - 14964 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050617s
	[INFO] 10.244.0.7:35793 - 44276 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054444s
	[INFO] 10.244.0.7:35793 - 28919 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126933s
	[INFO] 10.244.0.7:41597 - 54420 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067684s
	[INFO] 10.244.0.7:41597 - 15249 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168983s
	[INFO] 10.244.0.7:51563 - 54876 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108786s
	[INFO] 10.244.0.7:51563 - 50266 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131052s
	[INFO] 10.244.0.7:34565 - 35957 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036978s
	[INFO] 10.244.0.7:34565 - 44663 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042174s
	[INFO] 10.244.0.7:42123 - 49510 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005782s
	[INFO] 10.244.0.7:42123 - 19303 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000028853s
	[INFO] 10.244.0.22:42275 - 3819 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166041s
	[INFO] 10.244.0.22:46321 - 27025 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000807326s
	[INFO] 10.244.0.22:38141 - 32280 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123722s
	[INFO] 10.244.0.22:55685 - 39294 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000274728s
	[INFO] 10.244.0.22:49384 - 11311 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116954s
	[INFO] 10.244.0.22:59354 - 10926 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00026542s
	[INFO] 10.244.0.22:49641 - 21222 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000694673s
	[INFO] 10.244.0.22:37007 - 25487 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000712432s
	[INFO] 10.244.0.27:54379 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000364485s
	[INFO] 10.244.0.27:55795 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098202s
	
	
	==> describe nodes <==
	Name:               addons-715925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-715925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=addons-715925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_29_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-715925
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:29:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-715925
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:39:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:36:23 +0000   Wed, 31 Jul 2024 19:29:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:36:23 +0000   Wed, 31 Jul 2024 19:29:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:36:23 +0000   Wed, 31 Jul 2024 19:29:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:36:23 +0000   Wed, 31 Jul 2024 19:29:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    addons-715925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c12009eb379d4987aaee89629ea0d81e
	  System UUID:                c12009eb-379d-4987-aaee-89629ea0d81e
	  Boot ID:                    db862b72-c89b-4454-bb24-c704de455a63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  default                     hello-world-app-6778b5fc9f-hw5cv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 coredns-7db6d8ff4d-fzb4m                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m14s
	  kube-system                 etcd-addons-715925                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m27s
	  kube-system                 kube-apiserver-addons-715925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 kube-controller-manager-addons-715925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 kube-proxy-tfzvz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-addons-715925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 metrics-server-c59844bb4-s4tts           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  Starting                 9m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m33s (x8 over 9m34s)  kubelet          Node addons-715925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m33s (x8 over 9m34s)  kubelet          Node addons-715925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m33s (x7 over 9m34s)  kubelet          Node addons-715925 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m27s                  kubelet          Node addons-715925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s                  kubelet          Node addons-715925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s                  kubelet          Node addons-715925 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m26s                  kubelet          Node addons-715925 status is now: NodeReady
	  Normal  RegisteredNode           9m15s                  node-controller  Node addons-715925 event: Registered Node addons-715925 in Controller
	
	
	==> dmesg <==
	[  +5.365336] kauditd_printk_skb: 126 callbacks suppressed
	[  +6.532665] kauditd_printk_skb: 99 callbacks suppressed
	[ +37.616437] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.678739] kauditd_printk_skb: 30 callbacks suppressed
	[Jul31 19:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.212974] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.027127] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.583940] kauditd_printk_skb: 10 callbacks suppressed
	[Jul31 19:32] kauditd_printk_skb: 24 callbacks suppressed
	[ +16.959149] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.150327] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.201404] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.421112] kauditd_printk_skb: 4 callbacks suppressed
	[ +16.123969] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.101261] kauditd_printk_skb: 47 callbacks suppressed
	[Jul31 19:33] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.114737] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.415201] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.450189] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.430019] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.441184] kauditd_printk_skb: 65 callbacks suppressed
	[  +5.852375] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.556726] kauditd_printk_skb: 6 callbacks suppressed
	[Jul31 19:36] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.128661] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [c81b526e401e8d13a85b39fa802c3b87acaf639eb2eb96413420d1fcb5c42814] <==
	{"level":"warn","ts":"2024-07-31T19:31:14.578663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.51853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.147\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-31T19:31:14.578678Z","caller":"traceutil/trace.go:171","msg":"trace[593203547] range","detail":"{range_begin:/registry/masterleases/192.168.39.147; range_end:; response_count:1; response_revision:1055; }","duration":"225.557969ms","start":"2024-07-31T19:31:14.353115Z","end":"2024-07-31T19:31:14.578673Z","steps":["trace[593203547] 'agreement among raft nodes before linearized reading'  (duration: 225.499513ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:31:25.245605Z","caller":"traceutil/trace.go:171","msg":"trace[640099674] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"382.600336ms","start":"2024-07-31T19:31:24.862924Z","end":"2024-07-31T19:31:25.245524Z","steps":["trace[640099674] 'process raft request'  (duration: 382.275219ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:31:25.246409Z","caller":"traceutil/trace.go:171","msg":"trace[1689728] linearizableReadLoop","detail":"{readStateIndex:1196; appliedIndex:1196; }","duration":"204.261325ms","start":"2024-07-31T19:31:25.041724Z","end":"2024-07-31T19:31:25.245986Z","steps":["trace[1689728] 'read index received'  (duration: 204.257116ms)","trace[1689728] 'applied index is now lower than readState.Index'  (duration: 3.634µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T19:31:25.246604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.811463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T19:31:25.246912Z","caller":"traceutil/trace.go:171","msg":"trace[1187332687] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1157; }","duration":"205.143161ms","start":"2024-07-31T19:31:25.041702Z","end":"2024-07-31T19:31:25.246845Z","steps":["trace[1187332687] 'agreement among raft nodes before linearized reading'  (duration: 204.733646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:31:25.247484Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:31:24.86286Z","time spent":"383.846221ms","remote":"127.0.0.1:51002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1117 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-07-31T19:31:25.250962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.31896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85649"}
	{"level":"info","ts":"2024-07-31T19:31:25.25107Z","caller":"traceutil/trace.go:171","msg":"trace[1549059227] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1158; }","duration":"159.452798ms","start":"2024-07-31T19:31:25.091609Z","end":"2024-07-31T19:31:25.251061Z","steps":["trace[1549059227] 'agreement among raft nodes before linearized reading'  (duration: 159.171833ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:32:29.04634Z","caller":"traceutil/trace.go:171","msg":"trace[1974567133] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"279.102742ms","start":"2024-07-31T19:32:28.767211Z","end":"2024-07-31T19:32:29.046314Z","steps":["trace[1974567133] 'read index received'  (duration: 278.98642ms)","trace[1974567133] 'applied index is now lower than readState.Index'  (duration: 115.784µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T19:32:29.046554Z","caller":"traceutil/trace.go:171","msg":"trace[2035373745] transaction","detail":"{read_only:false; response_revision:1286; number_of_response:1; }","duration":"464.125858ms","start":"2024-07-31T19:32:28.582412Z","end":"2024-07-31T19:32:29.046538Z","steps":["trace[2035373745] 'process raft request'  (duration: 463.80075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:32:29.046709Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:32:28.582398Z","time spent":"464.207879ms","remote":"127.0.0.1:50934","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1282 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-31T19:32:29.046823Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.599749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-31T19:32:29.046952Z","caller":"traceutil/trace.go:171","msg":"trace[1826426069] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1286; }","duration":"279.732516ms","start":"2024-07-31T19:32:28.767207Z","end":"2024-07-31T19:32:29.046939Z","steps":["trace[1826426069] 'agreement among raft nodes before linearized reading'  (duration: 279.365528ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:32:29.047022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.699097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-31T19:32:29.047063Z","caller":"traceutil/trace.go:171","msg":"trace[1889849188] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1286; }","duration":"243.760946ms","start":"2024-07-31T19:32:28.803295Z","end":"2024-07-31T19:32:29.047056Z","steps":["trace[1889849188] 'agreement among raft nodes before linearized reading'  (duration: 243.624503ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:33:38.689559Z","caller":"traceutil/trace.go:171","msg":"trace[164354990] linearizableReadLoop","detail":"{readStateIndex:1931; appliedIndex:1930; }","duration":"398.653486ms","start":"2024-07-31T19:33:38.290861Z","end":"2024-07-31T19:33:38.689515Z","steps":["trace[164354990] 'read index received'  (duration: 398.433375ms)","trace[164354990] 'applied index is now lower than readState.Index'  (duration: 219.674µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T19:33:38.689819Z","caller":"traceutil/trace.go:171","msg":"trace[629265884] transaction","detail":"{read_only:false; response_revision:1852; number_of_response:1; }","duration":"412.02375ms","start":"2024-07-31T19:33:38.277776Z","end":"2024-07-31T19:33:38.6898Z","steps":["trace[629265884] 'process raft request'  (duration: 411.550986ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:33:38.690074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"399.18898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3609"}
	{"level":"warn","ts":"2024-07-31T19:33:38.690111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:33:38.277755Z","time spent":"412.203683ms","remote":"127.0.0.1:51002","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1766 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-07-31T19:33:38.690244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.109161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/csi-hostpath-snapclass\" ","response":"range_response_count:1 size:1176"}
	{"level":"info","ts":"2024-07-31T19:33:38.690299Z","caller":"traceutil/trace.go:171","msg":"trace[525055464] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/csi-hostpath-snapclass; range_end:; response_count:1; response_revision:1852; }","duration":"267.183598ms","start":"2024-07-31T19:33:38.423105Z","end":"2024-07-31T19:33:38.690289Z","steps":["trace[525055464] 'agreement among raft nodes before linearized reading'  (duration: 267.087732ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:33:38.690133Z","caller":"traceutil/trace.go:171","msg":"trace[502315660] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1852; }","duration":"399.28118ms","start":"2024-07-31T19:33:38.290837Z","end":"2024-07-31T19:33:38.690118Z","steps":["trace[502315660] 'agreement among raft nodes before linearized reading'  (duration: 399.098813ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:33:38.69115Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:33:38.290825Z","time spent":"400.31255ms","remote":"127.0.0.1:50950","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3632,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"info","ts":"2024-07-31T19:34:29.134543Z","caller":"traceutil/trace.go:171","msg":"trace[586832778] transaction","detail":"{read_only:false; response_revision:2026; number_of_response:1; }","duration":"115.065421ms","start":"2024-07-31T19:34:29.019447Z","end":"2024-07-31T19:34:29.134513Z","steps":["trace[586832778] 'process raft request'  (duration: 114.776167ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:39:12 up 10 min,  0 users,  load average: 0.18, 0.59, 0.49
	Linux addons-715925 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b7f5e7ab4069c46e45ea9fd19f37ce6e3e75d8124ef621d14425f38b33d0f0d5] <==
	I0731 19:32:05.319421       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0731 19:32:44.722072       1 conn.go:339] Error on socket receive: read tcp 192.168.39.147:8443->192.168.39.1:55032: use of closed network connection
	E0731 19:32:44.940270       1 conn.go:339] Error on socket receive: read tcp 192.168.39.147:8443->192.168.39.1:55056: use of closed network connection
	I0731 19:33:13.870457       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0731 19:33:28.344401       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0731 19:33:28.560705       1 watch.go:250] http2: stream closed
	I0731 19:33:30.222315       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.73.217"}
	I0731 19:33:38.725385       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.725480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 19:33:38.760821       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.760914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 19:33:38.766490       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.766558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 19:33:38.809024       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.809056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 19:33:38.854787       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 19:33:38.855085       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 19:33:39.766945       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 19:33:39.855457       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 19:33:39.888209       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 19:33:44.644176       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 19:33:45.683583       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 19:33:50.125092       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 19:33:50.302755       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.191.185"}
	I0731 19:36:11.951771       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.241.161"}
	
	
	==> kube-controller-manager [ecf8437cb53cce09c68107be89bbbf45d96c20680905e648351258872ea756c8] <==
	W0731 19:37:03.113125       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:37:03.113313       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:37:06.527357       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:37:06.527406       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:37:09.618976       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:37:09.619112       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:37:16.000939       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:37:16.001053       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:37:51.488506       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:37:51.488744       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:37:58.206126       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:37:58.206185       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:38:03.598178       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:38:03.598287       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:38:03.734799       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:38:03.734939       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:38:28.521270       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:38:28.521444       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:38:31.514637       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:38:31.514701       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:38:48.741305       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:38:48.741364       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 19:38:58.529637       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 19:38:58.529681       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 19:39:11.385535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="18.066µs"
	
	
	==> kube-proxy [13d77da1e019e7b2e6441e752b0606f228eed93cdcf09b3bc25d4fe86b47752a] <==
	I0731 19:30:02.839547       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:30:02.909990       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0731 19:30:04.573319       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:30:04.573364       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:30:04.573380       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:30:04.607464       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:30:04.607737       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:30:04.607753       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:30:04.627535       1 config.go:192] "Starting service config controller"
	I0731 19:30:04.627555       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:30:04.627585       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:30:04.627589       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:30:04.630647       1 config.go:319] "Starting node config controller"
	I0731 19:30:04.630657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:30:04.730014       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:30:04.730069       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:30:04.756100       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [779ef57c86f86b0d98dedac94a771d1ce30371244d6438e008697acc9e5bf9b8] <==
	W0731 19:29:42.666249       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 19:29:42.666284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 19:29:42.666301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 19:29:42.666307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 19:29:42.666353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:29:42.666381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:29:42.666450       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:29:42.666461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:29:43.497060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:29:43.497168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:29:43.591383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:29:43.591751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:29:43.667489       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:29:43.667624       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:29:43.770363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 19:29:43.770530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 19:29:43.847949       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 19:29:43.848519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 19:29:43.872792       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:29:43.874292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 19:29:43.901687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:29:43.901791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:29:43.962045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 19:29:43.962140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 19:29:45.536248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 19:36:45 addons-715925 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:36:45 addons-715925 kubelet[1266]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:36:45 addons-715925 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:36:45 addons-715925 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:36:45 addons-715925 kubelet[1266]: I0731 19:36:45.854934    1266 scope.go:117] "RemoveContainer" containerID="4c0a0d9ea24c8f7b037f93184712db3555c0011ab694079725612736e2d36b92"
	Jul 31 19:36:45 addons-715925 kubelet[1266]: I0731 19:36:45.871100    1266 scope.go:117] "RemoveContainer" containerID="d7b13432b806ed5af7648fbfa6684ab2b53c0ae4f960d4a8e8d795f23019e89e"
	Jul 31 19:37:43 addons-715925 kubelet[1266]: I0731 19:37:43.175146    1266 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 19:37:45 addons-715925 kubelet[1266]: E0731 19:37:45.202482    1266 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:37:45 addons-715925 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:37:45 addons-715925 kubelet[1266]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:37:45 addons-715925 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:37:45 addons-715925 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:38:45 addons-715925 kubelet[1266]: E0731 19:38:45.202292    1266 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:38:45 addons-715925 kubelet[1266]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:38:45 addons-715925 kubelet[1266]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:38:45 addons-715925 kubelet[1266]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:38:45 addons-715925 kubelet[1266]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:39:03 addons-715925 kubelet[1266]: I0731 19:39:03.171656    1266 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 19:39:11 addons-715925 kubelet[1266]: I0731 19:39:11.412134    1266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-hw5cv" podStartSLOduration=177.933714615 podStartE2EDuration="3m0.412102674s" podCreationTimestamp="2024-07-31 19:36:11 +0000 UTC" firstStartedPulling="2024-07-31 19:36:12.392410744 +0000 UTC m=+387.344583340" lastFinishedPulling="2024-07-31 19:36:14.870798802 +0000 UTC m=+389.822971399" observedRunningTime="2024-07-31 19:36:15.421977894 +0000 UTC m=+390.374150506" watchObservedRunningTime="2024-07-31 19:39:11.412102674 +0000 UTC m=+566.364275282"
	Jul 31 19:39:12 addons-715925 kubelet[1266]: I0731 19:39:12.808811    1266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/16f96003-84b9-4f23-a5c6-b1f5047bf0f7-tmp-dir\") pod \"16f96003-84b9-4f23-a5c6-b1f5047bf0f7\" (UID: \"16f96003-84b9-4f23-a5c6-b1f5047bf0f7\") "
	Jul 31 19:39:12 addons-715925 kubelet[1266]: I0731 19:39:12.808965    1266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9k8z\" (UniqueName: \"kubernetes.io/projected/16f96003-84b9-4f23-a5c6-b1f5047bf0f7-kube-api-access-c9k8z\") pod \"16f96003-84b9-4f23-a5c6-b1f5047bf0f7\" (UID: \"16f96003-84b9-4f23-a5c6-b1f5047bf0f7\") "
	Jul 31 19:39:12 addons-715925 kubelet[1266]: I0731 19:39:12.809594    1266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16f96003-84b9-4f23-a5c6-b1f5047bf0f7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "16f96003-84b9-4f23-a5c6-b1f5047bf0f7" (UID: "16f96003-84b9-4f23-a5c6-b1f5047bf0f7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 31 19:39:12 addons-715925 kubelet[1266]: I0731 19:39:12.813076    1266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f96003-84b9-4f23-a5c6-b1f5047bf0f7-kube-api-access-c9k8z" (OuterVolumeSpecName: "kube-api-access-c9k8z") pod "16f96003-84b9-4f23-a5c6-b1f5047bf0f7" (UID: "16f96003-84b9-4f23-a5c6-b1f5047bf0f7"). InnerVolumeSpecName "kube-api-access-c9k8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 19:39:12 addons-715925 kubelet[1266]: I0731 19:39:12.909198    1266 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/16f96003-84b9-4f23-a5c6-b1f5047bf0f7-tmp-dir\") on node \"addons-715925\" DevicePath \"\""
	Jul 31 19:39:12 addons-715925 kubelet[1266]: I0731 19:39:12.909254    1266 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c9k8z\" (UniqueName: \"kubernetes.io/projected/16f96003-84b9-4f23-a5c6-b1f5047bf0f7-kube-api-access-c9k8z\") on node \"addons-715925\" DevicePath \"\""
	
	
	==> storage-provisioner [09c88c0d9b413855503bc52c539befd82c696445beca7b2ce89e20c13859c542] <==
	I0731 19:30:08.671806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 19:30:08.694463       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 19:30:08.695923       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 19:30:08.779586       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 19:30:08.779741       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-715925_02836a47-2513-4a36-9ad5-e52438ae791c!
	I0731 19:30:08.780688       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"adf40b22-8d5f-44f6-92d5-499e9a40e228", APIVersion:"v1", ResourceVersion:"792", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-715925_02836a47-2513-4a36-9ad5-e52438ae791c became leader
	I0731 19:30:08.880823       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-715925_02836a47-2513-4a36-9ad5-e52438ae791c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-715925 -n addons-715925
helpers_test.go:261: (dbg) Run:  kubectl --context addons-715925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (358.48s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-715925
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-715925: exit status 82 (2m0.461917243s)

                                                
                                                
-- stdout --
	* Stopping node "addons-715925"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-715925" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-715925
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-715925: exit status 11 (21.685666156s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-715925" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-715925
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-715925: exit status 11 (6.143712578s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-715925" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-715925
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-715925: exit status 11 (6.144409157s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-715925" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.44s)
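
The stop failure above cascades: once "minikube stop" exits 82 the VM is left unreachable, so every later addon command fails its paused check with "dial tcp 192.168.39.147:22: connect: no route to host". The following is a minimal triage sketch, not minikube's own code, that reproduces just the SSH reachability probe; the address comes from the stderr above and the 5s timeout is an assumption.

	// probe_ssh.go: standalone reachability probe for the node's SSH port.
	// Sketch only; on a run like the one logged above it should fail the same way
	// ("connect: no route to host").
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.147:22" // node address reported in the stderr above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("dial %s failed: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s succeeded\n", addr)
	}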

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (9.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 ssh pgrep buildkitd: exit status 1 (194.567225ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image build -t localhost/my-image:functional-904202 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 image build -t localhost/my-image:functional-904202 testdata/build --alsologtostderr: (7.048632916s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-904202 image build -t localhost/my-image:functional-904202 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a449b8befb3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-904202
--> 788b49bf6e1
Successfully tagged localhost/my-image:functional-904202
788b49bf6e12a7a604d2566e3cc00f5c6bb7f531ba01f4fa9f2b3e046414fe6a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-904202 image build -t localhost/my-image:functional-904202 testdata/build --alsologtostderr:
I0731 19:45:37.707465  139529 out.go:291] Setting OutFile to fd 1 ...
I0731 19:45:37.707632  139529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.707642  139529 out.go:304] Setting ErrFile to fd 2...
I0731 19:45:37.707646  139529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.707851  139529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
I0731 19:45:37.708557  139529 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.709366  139529 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.709950  139529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.710012  139529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.727610  139529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
I0731 19:45:37.728281  139529 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.728978  139529 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.729003  139529 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.729425  139529 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.729623  139529 main.go:141] libmachine: (functional-904202) Calling .GetState
I0731 19:45:37.731799  139529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.731849  139529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.748416  139529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45169
I0731 19:45:37.748854  139529 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.749603  139529 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.749623  139529 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.750172  139529 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.750385  139529 main.go:141] libmachine: (functional-904202) Calling .DriverName
I0731 19:45:37.750710  139529 ssh_runner.go:195] Run: systemctl --version
I0731 19:45:37.750747  139529 main.go:141] libmachine: (functional-904202) Calling .GetSSHHostname
I0731 19:45:37.754173  139529 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.754730  139529 main.go:141] libmachine: (functional-904202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ae:4a", ip: ""} in network mk-functional-904202: {Iface:virbr1 ExpiryTime:2024-07-31 20:42:54 +0000 UTC Type:0 Mac:52:54:00:2c:ae:4a Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-904202 Clientid:01:52:54:00:2c:ae:4a}
I0731 19:45:37.754752  139529 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined IP address 192.168.39.96 and MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.755129  139529 main.go:141] libmachine: (functional-904202) Calling .GetSSHPort
I0731 19:45:37.755323  139529 main.go:141] libmachine: (functional-904202) Calling .GetSSHKeyPath
I0731 19:45:37.755565  139529 main.go:141] libmachine: (functional-904202) Calling .GetSSHUsername
I0731 19:45:37.755741  139529 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/functional-904202/id_rsa Username:docker}
I0731 19:45:37.865037  139529 build_images.go:161] Building image from path: /tmp/build.1264411844.tar
I0731 19:45:37.865096  139529 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 19:45:37.886663  139529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1264411844.tar
I0731 19:45:37.896951  139529 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1264411844.tar: stat -c "%s %y" /var/lib/minikube/build/build.1264411844.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1264411844.tar': No such file or directory
I0731 19:45:37.896999  139529 ssh_runner.go:362] scp /tmp/build.1264411844.tar --> /var/lib/minikube/build/build.1264411844.tar (3072 bytes)
I0731 19:45:37.950093  139529 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1264411844
I0731 19:45:37.994325  139529 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1264411844 -xf /var/lib/minikube/build/build.1264411844.tar
I0731 19:45:38.023570  139529 crio.go:315] Building image: /var/lib/minikube/build/build.1264411844
I0731 19:45:38.023650  139529 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-904202 /var/lib/minikube/build/build.1264411844 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0731 19:45:44.645355  139529 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-904202 /var/lib/minikube/build/build.1264411844 --cgroup-manager=cgroupfs: (6.621659105s)
I0731 19:45:44.645445  139529 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1264411844
I0731 19:45:44.670273  139529 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1264411844.tar
I0731 19:45:44.705016  139529 build_images.go:217] Built localhost/my-image:functional-904202 from /tmp/build.1264411844.tar
I0731 19:45:44.705084  139529 build_images.go:133] succeeded building to: functional-904202
I0731 19:45:44.705092  139529 build_images.go:134] failed building to: 
I0731 19:45:44.705129  139529 main.go:141] libmachine: Making call to close driver server
I0731 19:45:44.705144  139529 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:44.705556  139529 main.go:141] libmachine: (functional-904202) DBG | Closing plugin on server side
I0731 19:45:44.705608  139529 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:44.705620  139529 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:45:44.705633  139529 main.go:141] libmachine: Making call to close driver server
I0731 19:45:44.705646  139529 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:44.705854  139529 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:44.705872  139529 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 image ls: (2.248343131s)
functional_test.go:442: expected "localhost/my-image:functional-904202" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (9.49s)
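
The build itself succeeded (the stderr above shows the image committed and tagged as localhost/my-image:functional-904202), but the follow-up "image ls" at functional_test.go:447 did not list the tag, which is what functional_test.go:442 reports. Below is a rough sketch of that final assertion, assuming the binary path and profile name shown in the log; it is not the test's actual code.

	// check_image.go: re-run the listing and look for the freshly built tag.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Binary path and profile name are copied from the test log above.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-904202", "image", "ls").CombinedOutput()
		if err != nil {
			fmt.Printf("image ls failed: %v\n%s", err, out)
			return
		}
		if strings.Contains(string(out), "localhost/my-image:functional-904202") {
			fmt.Println("image present in the runtime's store")
		} else {
			fmt.Println("image missing, the same condition the test reports above")
		}
	}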

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 node stop m02 -v=7 --alsologtostderr
E0731 19:52:34.578006  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:52:53.668352  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.473746586s)

                                                
                                                
-- stdout --
	* Stopping node "ha-235073-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:51:52.208888  144143 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:51:52.209032  144143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:51:52.209042  144143 out.go:304] Setting ErrFile to fd 2...
	I0731 19:51:52.209047  144143 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:51:52.209220  144143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:51:52.209990  144143 mustload.go:65] Loading cluster: ha-235073
	I0731 19:51:52.210984  144143 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:51:52.211015  144143 stop.go:39] StopHost: ha-235073-m02
	I0731 19:51:52.211532  144143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:51:52.211601  144143 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:51:52.228361  144143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I0731 19:51:52.228862  144143 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:51:52.229442  144143 main.go:141] libmachine: Using API Version  1
	I0731 19:51:52.229473  144143 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:51:52.229858  144143 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:51:52.232018  144143 out.go:177] * Stopping node "ha-235073-m02"  ...
	I0731 19:51:52.233743  144143 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 19:51:52.233773  144143 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:51:52.234029  144143 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 19:51:52.234058  144143 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:51:52.237006  144143 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:51:52.237558  144143 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:51:52.237639  144143 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:51:52.237829  144143 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:51:52.237999  144143 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:51:52.238160  144143 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:51:52.238341  144143 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:51:52.325503  144143 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 19:51:52.380616  144143 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 19:51:52.436876  144143 main.go:141] libmachine: Stopping "ha-235073-m02"...
	I0731 19:51:52.436898  144143 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:51:52.438616  144143 main.go:141] libmachine: (ha-235073-m02) Calling .Stop
	I0731 19:51:52.442628  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 0/120
	I0731 19:51:53.444099  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 1/120
	I0731 19:51:54.445351  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 2/120
	I0731 19:51:55.447675  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 3/120
	I0731 19:51:56.449135  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 4/120
	I0731 19:51:57.450458  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 5/120
	I0731 19:51:58.451739  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 6/120
	I0731 19:51:59.453175  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 7/120
	I0731 19:52:00.454567  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 8/120
	I0731 19:52:01.455915  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 9/120
	I0731 19:52:02.458028  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 10/120
	I0731 19:52:03.460086  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 11/120
	I0731 19:52:04.462072  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 12/120
	I0731 19:52:05.463776  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 13/120
	I0731 19:52:06.465316  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 14/120
	I0731 19:52:07.467229  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 15/120
	I0731 19:52:08.468521  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 16/120
	I0731 19:52:09.470606  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 17/120
	I0731 19:52:10.472016  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 18/120
	I0731 19:52:11.473494  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 19/120
	I0731 19:52:12.475528  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 20/120
	I0731 19:52:13.476923  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 21/120
	I0731 19:52:14.478289  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 22/120
	I0731 19:52:15.480132  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 23/120
	I0731 19:52:16.481502  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 24/120
	I0731 19:52:17.483411  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 25/120
	I0731 19:52:18.484791  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 26/120
	I0731 19:52:19.486288  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 27/120
	I0731 19:52:20.487851  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 28/120
	I0731 19:52:21.489316  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 29/120
	I0731 19:52:22.491444  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 30/120
	I0731 19:52:23.492805  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 31/120
	I0731 19:52:24.495130  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 32/120
	I0731 19:52:25.496754  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 33/120
	I0731 19:52:26.498784  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 34/120
	I0731 19:52:27.500309  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 35/120
	I0731 19:52:28.502641  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 36/120
	I0731 19:52:29.504364  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 37/120
	I0731 19:52:30.505824  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 38/120
	I0731 19:52:31.507662  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 39/120
	I0731 19:52:32.508867  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 40/120
	I0731 19:52:33.510293  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 41/120
	I0731 19:52:34.511593  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 42/120
	I0731 19:52:35.512844  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 43/120
	I0731 19:52:36.514065  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 44/120
	I0731 19:52:37.515945  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 45/120
	I0731 19:52:38.517476  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 46/120
	I0731 19:52:39.518753  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 47/120
	I0731 19:52:40.520088  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 48/120
	I0731 19:52:41.521409  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 49/120
	I0731 19:52:42.523379  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 50/120
	I0731 19:52:43.524614  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 51/120
	I0731 19:52:44.525999  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 52/120
	I0731 19:52:45.527459  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 53/120
	I0731 19:52:46.529043  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 54/120
	I0731 19:52:47.531069  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 55/120
	I0731 19:52:48.532368  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 56/120
	I0731 19:52:49.533755  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 57/120
	I0731 19:52:50.535893  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 58/120
	I0731 19:52:51.537360  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 59/120
	I0731 19:52:52.539466  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 60/120
	I0731 19:52:53.540918  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 61/120
	I0731 19:52:54.542241  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 62/120
	I0731 19:52:55.543893  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 63/120
	I0731 19:52:56.545216  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 64/120
	I0731 19:52:57.546701  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 65/120
	I0731 19:52:58.549037  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 66/120
	I0731 19:52:59.550158  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 67/120
	I0731 19:53:00.551543  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 68/120
	I0731 19:53:01.552840  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 69/120
	I0731 19:53:02.554899  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 70/120
	I0731 19:53:03.556612  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 71/120
	I0731 19:53:04.558026  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 72/120
	I0731 19:53:05.559861  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 73/120
	I0731 19:53:06.561212  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 74/120
	I0731 19:53:07.563122  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 75/120
	I0731 19:53:08.564699  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 76/120
	I0731 19:53:09.566086  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 77/120
	I0731 19:53:10.567859  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 78/120
	I0731 19:53:11.569300  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 79/120
	I0731 19:53:12.570899  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 80/120
	I0731 19:53:13.572271  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 81/120
	I0731 19:53:14.573715  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 82/120
	I0731 19:53:15.575735  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 83/120
	I0731 19:53:16.577365  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 84/120
	I0731 19:53:17.579278  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 85/120
	I0731 19:53:18.580753  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 86/120
	I0731 19:53:19.582022  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 87/120
	I0731 19:53:20.583364  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 88/120
	I0731 19:53:21.584963  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 89/120
	I0731 19:53:22.587082  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 90/120
	I0731 19:53:23.588660  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 91/120
	I0731 19:53:24.590969  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 92/120
	I0731 19:53:25.592899  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 93/120
	I0731 19:53:26.594650  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 94/120
	I0731 19:53:27.596578  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 95/120
	I0731 19:53:28.597860  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 96/120
	I0731 19:53:29.599176  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 97/120
	I0731 19:53:30.600473  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 98/120
	I0731 19:53:31.601692  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 99/120
	I0731 19:53:32.603677  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 100/120
	I0731 19:53:33.605748  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 101/120
	I0731 19:53:34.607425  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 102/120
	I0731 19:53:35.608765  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 103/120
	I0731 19:53:36.610122  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 104/120
	I0731 19:53:37.611945  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 105/120
	I0731 19:53:38.613630  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 106/120
	I0731 19:53:39.615741  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 107/120
	I0731 19:53:40.616926  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 108/120
	I0731 19:53:41.618606  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 109/120
	I0731 19:53:42.620586  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 110/120
	I0731 19:53:43.621931  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 111/120
	I0731 19:53:44.624041  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 112/120
	I0731 19:53:45.626321  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 113/120
	I0731 19:53:46.627736  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 114/120
	I0731 19:53:47.629373  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 115/120
	I0731 19:53:48.631434  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 116/120
	I0731 19:53:49.632802  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 117/120
	I0731 19:53:50.634284  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 118/120
	I0731 19:53:51.636851  144143 main.go:141] libmachine: (ha-235073-m02) Waiting for machine to stop 119/120
	I0731 19:53:52.638189  144143 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 19:53:52.638333  144143 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-235073 node stop m02 -v=7 --alsologtostderr": exit status 30
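
The stderr above shows the shape of this failure: libmachine polls the VM state roughly once per second and gives up after 120 attempts, returning 'unable to stop vm, current state "Running"'. The sketch below mirrors that wait loop; getState is a hypothetical stand-in for the driver call and this is not minikube's implementation.

	// stop_wait.go: sketch of the "Waiting for machine to stop N/120" loop seen above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitForStop(getState func() string) error {
		for i := 0; i < 120; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/120\n", i)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a VM that never leaves "Running"; running this as-is takes the
		// full two minutes, matching the timing in the log.
		err := waitForStop(func() string { return "Running" })
		fmt.Println("stop err:", err)
	}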
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 3 (19.154913955s)

                                                
                                                
-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:53:52.683597  144588 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:53:52.683910  144588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:53:52.683924  144588 out.go:304] Setting ErrFile to fd 2...
	I0731 19:53:52.683930  144588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:53:52.684102  144588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:53:52.684262  144588 out.go:298] Setting JSON to false
	I0731 19:53:52.684288  144588 mustload.go:65] Loading cluster: ha-235073
	I0731 19:53:52.684420  144588 notify.go:220] Checking for updates...
	I0731 19:53:52.684653  144588 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:53:52.684670  144588 status.go:255] checking status of ha-235073 ...
	I0731 19:53:52.685055  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:53:52.685114  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:53:52.704857  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46607
	I0731 19:53:52.705321  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:53:52.705955  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:53:52.705982  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:53:52.706460  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:53:52.706670  144588 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:53:52.708442  144588 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:53:52.708469  144588 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:53:52.708858  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:53:52.708896  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:53:52.723229  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I0731 19:53:52.723691  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:53:52.724123  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:53:52.724145  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:53:52.724441  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:53:52.724668  144588 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:53:52.727408  144588 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:53:52.727799  144588 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:53:52.727826  144588 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:53:52.727968  144588 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:53:52.728244  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:53:52.728304  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:53:52.743024  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45187
	I0731 19:53:52.743379  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:53:52.743830  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:53:52.743857  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:53:52.744154  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:53:52.744334  144588 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:53:52.744532  144588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:53:52.744558  144588 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:53:52.746825  144588 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:53:52.747229  144588 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:53:52.747261  144588 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:53:52.747398  144588 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:53:52.747575  144588 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:53:52.747733  144588 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:53:52.747853  144588 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:53:52.826192  144588 ssh_runner.go:195] Run: systemctl --version
	I0731 19:53:52.832804  144588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:53:52.850146  144588 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:53:52.850187  144588 api_server.go:166] Checking apiserver status ...
	I0731 19:53:52.850235  144588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:53:52.865533  144588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:53:52.874967  144588 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:53:52.875019  144588 ssh_runner.go:195] Run: ls
	I0731 19:53:52.879363  144588 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:53:52.883647  144588 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:53:52.883668  144588 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:53:52.883678  144588 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:53:52.883693  144588 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:53:52.883987  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:53:52.884018  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:53:52.899844  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0731 19:53:52.900277  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:53:52.900763  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:53:52.900787  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:53:52.901149  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:53:52.901326  144588 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:53:52.903154  144588 status.go:330] ha-235073-m02 host status = "Running" (err=<nil>)
	I0731 19:53:52.903171  144588 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:53:52.903473  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:53:52.903517  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:53:52.919446  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0731 19:53:52.919841  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:53:52.920261  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:53:52.920284  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:53:52.920603  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:53:52.920856  144588 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:53:52.924205  144588 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:53:52.924691  144588 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:53:52.924718  144588 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:53:52.924891  144588 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:53:52.925320  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:53:52.925393  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:53:52.940424  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
	I0731 19:53:52.940974  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:53:52.941492  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:53:52.941518  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:53:52.941967  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:53:52.942217  144588 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:53:52.942413  144588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:53:52.942438  144588 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:53:52.945529  144588 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:53:52.945945  144588 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:53:52.945984  144588 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:53:52.946127  144588 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:53:52.946289  144588 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:53:52.946472  144588 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:53:52.946641  144588 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	W0731 19:54:11.429570  144588 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:11.429713  144588 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0731 19:54:11.429734  144588 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:11.429742  144588 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 19:54:11.429760  144588 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:11.429767  144588 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:54:11.430067  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:11.430108  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:11.445961  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0731 19:54:11.446452  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:11.446941  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:54:11.446964  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:11.447313  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:11.447515  144588 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:54:11.449157  144588 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:54:11.449176  144588 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:11.449558  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:11.449612  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:11.463853  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36973
	I0731 19:54:11.464236  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:11.464723  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:54:11.464744  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:11.465035  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:11.465278  144588 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:54:11.468004  144588 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:11.468425  144588 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:11.468454  144588 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:11.468524  144588 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:11.468830  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:11.468863  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:11.484079  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35811
	I0731 19:54:11.484505  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:11.484967  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:54:11.484989  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:11.485293  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:11.485486  144588 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:54:11.485664  144588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:11.485683  144588 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:54:11.488204  144588 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:11.488591  144588 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:11.488622  144588 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:11.488852  144588 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:54:11.489028  144588 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:54:11.489220  144588 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:54:11.489363  144588 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:54:11.579228  144588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:11.596907  144588 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:11.596937  144588 api_server.go:166] Checking apiserver status ...
	I0731 19:54:11.596969  144588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:11.612601  144588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:54:11.621947  144588 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:11.622003  144588 ssh_runner.go:195] Run: ls
	I0731 19:54:11.626534  144588 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:11.632781  144588 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:11.632802  144588 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:54:11.632811  144588 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:11.632824  144588 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:54:11.633106  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:11.633140  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:11.648011  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41691
	I0731 19:54:11.648433  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:11.648907  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:54:11.648934  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:11.649239  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:11.649418  144588 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:54:11.650886  144588 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:54:11.650903  144588 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:11.651166  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:11.651206  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:11.665488  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0731 19:54:11.665874  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:11.666314  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:54:11.666333  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:11.666642  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:11.666845  144588 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:54:11.669282  144588 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:11.669656  144588 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:11.669690  144588 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:11.669832  144588 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:11.670099  144588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:11.670133  144588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:11.685159  144588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0731 19:54:11.685600  144588 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:11.686073  144588 main.go:141] libmachine: Using API Version  1
	I0731 19:54:11.686089  144588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:11.686345  144588 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:11.686526  144588 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:54:11.686692  144588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:11.686715  144588 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:54:11.689645  144588 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:11.690042  144588 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:11.690069  144588 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:11.690185  144588 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:54:11.690357  144588 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:54:11.690493  144588 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:54:11.690646  144588 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:54:11.774038  144588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:11.792773  144588 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-235073 -n ha-235073
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-235073 logs -n 25: (1.435026823s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073:/home/docker/cp-test_ha-235073-m03_ha-235073.txt                       |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073 sudo cat                                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073.txt                                 |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m02:/home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m02 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04:/home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m04 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp testdata/cp-test.txt                                                | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073:/home/docker/cp-test_ha-235073-m04_ha-235073.txt                       |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073 sudo cat                                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073.txt                                 |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m02:/home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m02 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03:/home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m03 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-235073 node stop m02 -v=7                                                     | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:45:58
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:45:58.226009  139843 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:45:58.226125  139843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:58.226135  139843 out.go:304] Setting ErrFile to fd 2...
	I0731 19:45:58.226139  139843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:58.226314  139843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:45:58.226897  139843 out.go:298] Setting JSON to false
	I0731 19:45:58.228322  139843 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5294,"bootTime":1722449864,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:45:58.228583  139843 start.go:139] virtualization: kvm guest
	I0731 19:45:58.230861  139843 out.go:177] * [ha-235073] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:45:58.232284  139843 notify.go:220] Checking for updates...
	I0731 19:45:58.232346  139843 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:45:58.233738  139843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:45:58.235009  139843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:45:58.236378  139843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:45:58.237754  139843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:45:58.239041  139843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:45:58.240384  139843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:45:58.274375  139843 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 19:45:58.275858  139843 start.go:297] selected driver: kvm2
	I0731 19:45:58.275868  139843 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:45:58.275878  139843 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:45:58.276618  139843 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:45:58.276707  139843 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:45:58.291788  139843 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:45:58.291834  139843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:45:58.292047  139843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:45:58.292113  139843 cni.go:84] Creating CNI manager for ""
	I0731 19:45:58.292125  139843 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 19:45:58.292132  139843 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 19:45:58.292194  139843 start.go:340] cluster config:
	{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:45:58.292286  139843 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:45:58.294032  139843 out.go:177] * Starting "ha-235073" primary control-plane node in "ha-235073" cluster
	I0731 19:45:58.295338  139843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:45:58.295370  139843 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:45:58.295385  139843 cache.go:56] Caching tarball of preloaded images
	I0731 19:45:58.295472  139843 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:45:58.295483  139843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:45:58.295783  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:45:58.295802  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json: {Name:mk3eeddeb246ecc6b03da1587de41e99a8e651ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:45:58.295924  139843 start.go:360] acquireMachinesLock for ha-235073: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:45:58.295951  139843 start.go:364] duration metric: took 15.527µs to acquireMachinesLock for "ha-235073"
	I0731 19:45:58.295967  139843 start.go:93] Provisioning new machine with config: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:45:58.296020  139843 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 19:45:58.297644  139843 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 19:45:58.297774  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:45:58.297813  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:58.311988  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0731 19:45:58.312498  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:58.313061  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:45:58.313082  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:58.313469  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:58.313682  139843 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:45:58.313804  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:45:58.314007  139843 start.go:159] libmachine.API.Create for "ha-235073" (driver="kvm2")
	I0731 19:45:58.314037  139843 client.go:168] LocalClient.Create starting
	I0731 19:45:58.314073  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 19:45:58.314107  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:45:58.314123  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:45:58.314190  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 19:45:58.314207  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:45:58.314220  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:45:58.314234  139843 main.go:141] libmachine: Running pre-create checks...
	I0731 19:45:58.314244  139843 main.go:141] libmachine: (ha-235073) Calling .PreCreateCheck
	I0731 19:45:58.314573  139843 main.go:141] libmachine: (ha-235073) Calling .GetConfigRaw
	I0731 19:45:58.314934  139843 main.go:141] libmachine: Creating machine...
	I0731 19:45:58.314948  139843 main.go:141] libmachine: (ha-235073) Calling .Create
	I0731 19:45:58.315093  139843 main.go:141] libmachine: (ha-235073) Creating KVM machine...
	I0731 19:45:58.316257  139843 main.go:141] libmachine: (ha-235073) DBG | found existing default KVM network
	I0731 19:45:58.316963  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.316827  139866 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f330}
	I0731 19:45:58.317006  139843 main.go:141] libmachine: (ha-235073) DBG | created network xml: 
	I0731 19:45:58.317030  139843 main.go:141] libmachine: (ha-235073) DBG | <network>
	I0731 19:45:58.317043  139843 main.go:141] libmachine: (ha-235073) DBG |   <name>mk-ha-235073</name>
	I0731 19:45:58.317052  139843 main.go:141] libmachine: (ha-235073) DBG |   <dns enable='no'/>
	I0731 19:45:58.317063  139843 main.go:141] libmachine: (ha-235073) DBG |   
	I0731 19:45:58.317073  139843 main.go:141] libmachine: (ha-235073) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 19:45:58.317082  139843 main.go:141] libmachine: (ha-235073) DBG |     <dhcp>
	I0731 19:45:58.317093  139843 main.go:141] libmachine: (ha-235073) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 19:45:58.317120  139843 main.go:141] libmachine: (ha-235073) DBG |     </dhcp>
	I0731 19:45:58.317153  139843 main.go:141] libmachine: (ha-235073) DBG |   </ip>
	I0731 19:45:58.317165  139843 main.go:141] libmachine: (ha-235073) DBG |   
	I0731 19:45:58.317172  139843 main.go:141] libmachine: (ha-235073) DBG | </network>
	I0731 19:45:58.317179  139843 main.go:141] libmachine: (ha-235073) DBG | 
	I0731 19:45:58.321974  139843 main.go:141] libmachine: (ha-235073) DBG | trying to create private KVM network mk-ha-235073 192.168.39.0/24...
	I0731 19:45:58.386110  139843 main.go:141] libmachine: (ha-235073) DBG | private KVM network mk-ha-235073 192.168.39.0/24 created
	I0731 19:45:58.386149  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.386078  139866 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:45:58.386163  139843 main.go:141] libmachine: (ha-235073) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073 ...
	I0731 19:45:58.386182  139843 main.go:141] libmachine: (ha-235073) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:45:58.386316  139843 main.go:141] libmachine: (ha-235073) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 19:45:58.645435  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.645280  139866 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa...
	I0731 19:45:58.831858  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.831697  139866 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/ha-235073.rawdisk...
	I0731 19:45:58.831883  139843 main.go:141] libmachine: (ha-235073) DBG | Writing magic tar header
	I0731 19:45:58.831932  139843 main.go:141] libmachine: (ha-235073) DBG | Writing SSH key tar header
	I0731 19:45:58.831970  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.831844  139866 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073 ...
	I0731 19:45:58.831987  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073
	I0731 19:45:58.832044  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073 (perms=drwx------)
	I0731 19:45:58.832064  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 19:45:58.832072  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:45:58.832081  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:45:58.832092  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 19:45:58.832101  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 19:45:58.832107  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:45:58.832114  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:45:58.832121  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home
	I0731 19:45:58.832130  139843 main.go:141] libmachine: (ha-235073) DBG | Skipping /home - not owner
	I0731 19:45:58.832139  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 19:45:58.832146  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:45:58.832153  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:45:58.832160  139843 main.go:141] libmachine: (ha-235073) Creating domain...
	I0731 19:45:58.833455  139843 main.go:141] libmachine: (ha-235073) define libvirt domain using xml: 
	I0731 19:45:58.833482  139843 main.go:141] libmachine: (ha-235073) <domain type='kvm'>
	I0731 19:45:58.833492  139843 main.go:141] libmachine: (ha-235073)   <name>ha-235073</name>
	I0731 19:45:58.833503  139843 main.go:141] libmachine: (ha-235073)   <memory unit='MiB'>2200</memory>
	I0731 19:45:58.833512  139843 main.go:141] libmachine: (ha-235073)   <vcpu>2</vcpu>
	I0731 19:45:58.833519  139843 main.go:141] libmachine: (ha-235073)   <features>
	I0731 19:45:58.833527  139843 main.go:141] libmachine: (ha-235073)     <acpi/>
	I0731 19:45:58.833534  139843 main.go:141] libmachine: (ha-235073)     <apic/>
	I0731 19:45:58.833542  139843 main.go:141] libmachine: (ha-235073)     <pae/>
	I0731 19:45:58.833559  139843 main.go:141] libmachine: (ha-235073)     
	I0731 19:45:58.833567  139843 main.go:141] libmachine: (ha-235073)   </features>
	I0731 19:45:58.833576  139843 main.go:141] libmachine: (ha-235073)   <cpu mode='host-passthrough'>
	I0731 19:45:58.833594  139843 main.go:141] libmachine: (ha-235073)   
	I0731 19:45:58.833617  139843 main.go:141] libmachine: (ha-235073)   </cpu>
	I0731 19:45:58.833626  139843 main.go:141] libmachine: (ha-235073)   <os>
	I0731 19:45:58.833637  139843 main.go:141] libmachine: (ha-235073)     <type>hvm</type>
	I0731 19:45:58.833649  139843 main.go:141] libmachine: (ha-235073)     <boot dev='cdrom'/>
	I0731 19:45:58.833658  139843 main.go:141] libmachine: (ha-235073)     <boot dev='hd'/>
	I0731 19:45:58.833665  139843 main.go:141] libmachine: (ha-235073)     <bootmenu enable='no'/>
	I0731 19:45:58.833671  139843 main.go:141] libmachine: (ha-235073)   </os>
	I0731 19:45:58.833677  139843 main.go:141] libmachine: (ha-235073)   <devices>
	I0731 19:45:58.833685  139843 main.go:141] libmachine: (ha-235073)     <disk type='file' device='cdrom'>
	I0731 19:45:58.833735  139843 main.go:141] libmachine: (ha-235073)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/boot2docker.iso'/>
	I0731 19:45:58.833759  139843 main.go:141] libmachine: (ha-235073)       <target dev='hdc' bus='scsi'/>
	I0731 19:45:58.833773  139843 main.go:141] libmachine: (ha-235073)       <readonly/>
	I0731 19:45:58.833780  139843 main.go:141] libmachine: (ha-235073)     </disk>
	I0731 19:45:58.833793  139843 main.go:141] libmachine: (ha-235073)     <disk type='file' device='disk'>
	I0731 19:45:58.833805  139843 main.go:141] libmachine: (ha-235073)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:45:58.833821  139843 main.go:141] libmachine: (ha-235073)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/ha-235073.rawdisk'/>
	I0731 19:45:58.833836  139843 main.go:141] libmachine: (ha-235073)       <target dev='hda' bus='virtio'/>
	I0731 19:45:58.833850  139843 main.go:141] libmachine: (ha-235073)     </disk>
	I0731 19:45:58.833860  139843 main.go:141] libmachine: (ha-235073)     <interface type='network'>
	I0731 19:45:58.833870  139843 main.go:141] libmachine: (ha-235073)       <source network='mk-ha-235073'/>
	I0731 19:45:58.833879  139843 main.go:141] libmachine: (ha-235073)       <model type='virtio'/>
	I0731 19:45:58.833886  139843 main.go:141] libmachine: (ha-235073)     </interface>
	I0731 19:45:58.833897  139843 main.go:141] libmachine: (ha-235073)     <interface type='network'>
	I0731 19:45:58.833919  139843 main.go:141] libmachine: (ha-235073)       <source network='default'/>
	I0731 19:45:58.833938  139843 main.go:141] libmachine: (ha-235073)       <model type='virtio'/>
	I0731 19:45:58.833950  139843 main.go:141] libmachine: (ha-235073)     </interface>
	I0731 19:45:58.833961  139843 main.go:141] libmachine: (ha-235073)     <serial type='pty'>
	I0731 19:45:58.833973  139843 main.go:141] libmachine: (ha-235073)       <target port='0'/>
	I0731 19:45:58.833983  139843 main.go:141] libmachine: (ha-235073)     </serial>
	I0731 19:45:58.834010  139843 main.go:141] libmachine: (ha-235073)     <console type='pty'>
	I0731 19:45:58.834027  139843 main.go:141] libmachine: (ha-235073)       <target type='serial' port='0'/>
	I0731 19:45:58.834039  139843 main.go:141] libmachine: (ha-235073)     </console>
	I0731 19:45:58.834047  139843 main.go:141] libmachine: (ha-235073)     <rng model='virtio'>
	I0731 19:45:58.834058  139843 main.go:141] libmachine: (ha-235073)       <backend model='random'>/dev/random</backend>
	I0731 19:45:58.834065  139843 main.go:141] libmachine: (ha-235073)     </rng>
	I0731 19:45:58.834070  139843 main.go:141] libmachine: (ha-235073)     
	I0731 19:45:58.834073  139843 main.go:141] libmachine: (ha-235073)     
	I0731 19:45:58.834080  139843 main.go:141] libmachine: (ha-235073)   </devices>
	I0731 19:45:58.834084  139843 main.go:141] libmachine: (ha-235073) </domain>
	I0731 19:45:58.834098  139843 main.go:141] libmachine: (ha-235073) 
	I0731 19:45:58.838172  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:91:35:40 in network default
	I0731 19:45:58.838688  139843 main.go:141] libmachine: (ha-235073) Ensuring networks are active...
	I0731 19:45:58.838705  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:45:58.839241  139843 main.go:141] libmachine: (ha-235073) Ensuring network default is active
	I0731 19:45:58.839524  139843 main.go:141] libmachine: (ha-235073) Ensuring network mk-ha-235073 is active
	I0731 19:45:58.839948  139843 main.go:141] libmachine: (ha-235073) Getting domain xml...
	I0731 19:45:58.840528  139843 main.go:141] libmachine: (ha-235073) Creating domain...
	I0731 19:46:00.011490  139843 main.go:141] libmachine: (ha-235073) Waiting to get IP...
	I0731 19:46:00.012197  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:00.012529  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:00.012585  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:00.012518  139866 retry.go:31] will retry after 274.611149ms: waiting for machine to come up
	I0731 19:46:00.288981  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:00.289468  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:00.289496  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:00.289418  139866 retry.go:31] will retry after 345.869467ms: waiting for machine to come up
	I0731 19:46:00.637093  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:00.637491  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:00.637519  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:00.637440  139866 retry.go:31] will retry after 369.988704ms: waiting for machine to come up
	I0731 19:46:01.008943  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:01.009344  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:01.009377  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:01.009289  139866 retry.go:31] will retry after 444.790632ms: waiting for machine to come up
	I0731 19:46:01.455488  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:01.455918  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:01.455936  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:01.455886  139866 retry.go:31] will retry after 571.934824ms: waiting for machine to come up
	I0731 19:46:02.029661  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:02.030102  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:02.030130  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:02.030055  139866 retry.go:31] will retry after 821.5719ms: waiting for machine to come up
	I0731 19:46:02.852842  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:02.853142  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:02.853174  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:02.853085  139866 retry.go:31] will retry after 1.057355998s: waiting for machine to come up
	I0731 19:46:03.911898  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:03.912296  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:03.912324  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:03.912239  139866 retry.go:31] will retry after 1.140982402s: waiting for machine to come up
	I0731 19:46:05.054709  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:05.055046  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:05.055068  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:05.055013  139866 retry.go:31] will retry after 1.25607749s: waiting for machine to come up
	I0731 19:46:06.313657  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:06.314062  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:06.314090  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:06.314011  139866 retry.go:31] will retry after 2.299194759s: waiting for machine to come up
	I0731 19:46:08.615051  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:08.615548  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:08.615578  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:08.615494  139866 retry.go:31] will retry after 2.831140976s: waiting for machine to come up
	I0731 19:46:11.450444  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:11.450885  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:11.450914  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:11.450838  139866 retry.go:31] will retry after 2.851660254s: waiting for machine to come up
	I0731 19:46:14.304380  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:14.304871  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:14.304894  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:14.304834  139866 retry.go:31] will retry after 3.780280162s: waiting for machine to come up
	I0731 19:46:18.086353  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.086858  139843 main.go:141] libmachine: (ha-235073) Found IP for machine: 192.168.39.146
	I0731 19:46:18.086880  139843 main.go:141] libmachine: (ha-235073) Reserving static IP address...
	I0731 19:46:18.086893  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has current primary IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.087230  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find host DHCP lease matching {name: "ha-235073", mac: "52:54:00:81:60:31", ip: "192.168.39.146"} in network mk-ha-235073
	I0731 19:46:18.160399  139843 main.go:141] libmachine: (ha-235073) DBG | Getting to WaitForSSH function...
	I0731 19:46:18.160436  139843 main.go:141] libmachine: (ha-235073) Reserved static IP address: 192.168.39.146
	I0731 19:46:18.160451  139843 main.go:141] libmachine: (ha-235073) Waiting for SSH to be available...
	I0731 19:46:18.162832  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.163205  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:minikube Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.163238  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.163354  139843 main.go:141] libmachine: (ha-235073) DBG | Using SSH client type: external
	I0731 19:46:18.163372  139843 main.go:141] libmachine: (ha-235073) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa (-rw-------)
	I0731 19:46:18.163405  139843 main.go:141] libmachine: (ha-235073) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:46:18.163426  139843 main.go:141] libmachine: (ha-235073) DBG | About to run SSH command:
	I0731 19:46:18.163439  139843 main.go:141] libmachine: (ha-235073) DBG | exit 0
	I0731 19:46:18.285504  139843 main.go:141] libmachine: (ha-235073) DBG | SSH cmd err, output: <nil>: 
	I0731 19:46:18.285778  139843 main.go:141] libmachine: (ha-235073) KVM machine creation complete!
	I0731 19:46:18.286059  139843 main.go:141] libmachine: (ha-235073) Calling .GetConfigRaw
	I0731 19:46:18.286616  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:18.286855  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:18.287005  139843 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:46:18.287017  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:46:18.288490  139843 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:46:18.288504  139843 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:46:18.288517  139843 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:46:18.288525  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.290950  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.291370  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.291395  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.291544  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:18.291721  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.291919  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.292074  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:18.292231  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:18.292476  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:18.292491  139843 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:46:18.388762  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:46:18.388788  139843 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:46:18.388795  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.391599  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.391950  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.391984  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.392146  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:18.392367  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.392543  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.392699  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:18.392858  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:18.393037  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:18.393051  139843 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:46:18.490368  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:46:18.490455  139843 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:46:18.490463  139843 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:46:18.490470  139843 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:46:18.490790  139843 buildroot.go:166] provisioning hostname "ha-235073"
	I0731 19:46:18.490817  139843 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:46:18.490974  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.493867  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.494159  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.494186  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.494401  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:18.494609  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.494784  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.494912  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:18.495067  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:18.495288  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:18.495302  139843 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-235073 && echo "ha-235073" | sudo tee /etc/hostname
	I0731 19:46:18.607836  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073
	
	I0731 19:46:18.607873  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.610750  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.611100  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.611132  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.611270  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:18.611499  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.611662  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.611796  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:18.611949  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:18.612169  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:18.612196  139843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-235073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-235073/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-235073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:46:18.718538  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:46:18.718583  139843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:46:18.718606  139843 buildroot.go:174] setting up certificates
	I0731 19:46:18.718617  139843 provision.go:84] configureAuth start
	I0731 19:46:18.718626  139843 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:46:18.718956  139843 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:46:18.721716  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.722078  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.722116  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.722332  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.724853  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.725181  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.725211  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.725389  139843 provision.go:143] copyHostCerts
	I0731 19:46:18.725426  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:46:18.725469  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 19:46:18.725486  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:46:18.725567  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:46:18.725676  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:46:18.725701  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 19:46:18.725709  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:46:18.725748  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:46:18.725816  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:46:18.725840  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 19:46:18.725845  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:46:18.725879  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:46:18.725959  139843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.ha-235073 san=[127.0.0.1 192.168.39.146 ha-235073 localhost minikube]
	I0731 19:46:19.018788  139843 provision.go:177] copyRemoteCerts
	I0731 19:46:19.018860  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:46:19.018891  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.021580  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.021860  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.021904  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.022018  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.022223  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.022424  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.022580  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:19.104173  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:46:19.104258  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:46:19.128347  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:46:19.128446  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 19:46:19.152561  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:46:19.152653  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:46:19.177423  139843 provision.go:87] duration metric: took 458.789911ms to configureAuth
	I0731 19:46:19.177460  139843 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:46:19.177644  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:46:19.177731  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.180417  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.180701  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.180723  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.180884  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.181101  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.181268  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.181413  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.181581  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:19.181749  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:19.181764  139843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:46:19.444918  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:46:19.444960  139843 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:46:19.444973  139843 main.go:141] libmachine: (ha-235073) Calling .GetURL
	I0731 19:46:19.446199  139843 main.go:141] libmachine: (ha-235073) DBG | Using libvirt version 6000000
	I0731 19:46:19.447983  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.448327  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.448359  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.448413  139843 main.go:141] libmachine: Docker is up and running!
	I0731 19:46:19.448424  139843 main.go:141] libmachine: Reticulating splines...
	I0731 19:46:19.448433  139843 client.go:171] duration metric: took 21.134389884s to LocalClient.Create
	I0731 19:46:19.448472  139843 start.go:167] duration metric: took 21.134465555s to libmachine.API.Create "ha-235073"
	I0731 19:46:19.448484  139843 start.go:293] postStartSetup for "ha-235073" (driver="kvm2")
	I0731 19:46:19.448496  139843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:46:19.448521  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.448782  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:46:19.448805  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.450554  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.450860  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.450903  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.451018  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.451211  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.451379  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.451532  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:19.532377  139843 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:46:19.536707  139843 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:46:19.536732  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:46:19.536788  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:46:19.536857  139843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 19:46:19.536868  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 19:46:19.536958  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:46:19.547066  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:46:19.570870  139843 start.go:296] duration metric: took 122.370251ms for postStartSetup
	I0731 19:46:19.570953  139843 main.go:141] libmachine: (ha-235073) Calling .GetConfigRaw
	I0731 19:46:19.571664  139843 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:46:19.574060  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.574413  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.574440  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.574599  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:46:19.574777  139843 start.go:128] duration metric: took 21.278745189s to createHost
	I0731 19:46:19.574799  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.576744  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.577001  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.577036  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.577205  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.577405  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.577604  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.577743  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.577922  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:19.578083  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:19.578095  139843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:46:19.674714  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722455179.640290588
	
	I0731 19:46:19.674754  139843 fix.go:216] guest clock: 1722455179.640290588
	I0731 19:46:19.674762  139843 fix.go:229] Guest: 2024-07-31 19:46:19.640290588 +0000 UTC Remote: 2024-07-31 19:46:19.57478807 +0000 UTC m=+21.383718664 (delta=65.502518ms)
	I0731 19:46:19.674795  139843 fix.go:200] guest clock delta is within tolerance: 65.502518ms
	I0731 19:46:19.674804  139843 start.go:83] releasing machines lock for "ha-235073", held for 21.378844327s
	I0731 19:46:19.674825  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.675108  139843 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:46:19.678117  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.678495  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.678523  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.678671  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.679217  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.679385  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.679467  139843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:46:19.679497  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.679625  139843 ssh_runner.go:195] Run: cat /version.json
	I0731 19:46:19.679650  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.682124  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.682320  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.682653  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.682689  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.682719  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.682719  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.682746  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.682820  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.682906  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.683023  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.683114  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.683177  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.683289  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:19.683332  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:19.778962  139843 ssh_runner.go:195] Run: systemctl --version
	I0731 19:46:19.785027  139843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:46:19.947529  139843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:46:19.954223  139843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:46:19.954310  139843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:46:19.970253  139843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:46:19.970279  139843 start.go:495] detecting cgroup driver to use...
	I0731 19:46:19.970421  139843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:46:19.986810  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:46:20.000725  139843 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:46:20.000788  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:46:20.014432  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:46:20.027667  139843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:46:20.144356  139843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:46:20.280021  139843 docker.go:233] disabling docker service ...
	I0731 19:46:20.280088  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:46:20.295165  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:46:20.309130  139843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:46:20.437305  139843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:46:20.547819  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:46:20.562190  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:46:20.580796  139843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:46:20.580861  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.591809  139843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:46:20.591872  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.602731  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.613312  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.623950  139843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:46:20.634837  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.645253  139843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.662809  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.673628  139843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:46:20.683315  139843 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:46:20.683381  139843 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:46:20.697196  139843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:46:20.707078  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:46:20.818722  139843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:46:20.961445  139843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:46:20.961519  139843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:46:20.966586  139843 start.go:563] Will wait 60s for crictl version
	I0731 19:46:20.966678  139843 ssh_runner.go:195] Run: which crictl
	I0731 19:46:20.970382  139843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:46:21.006302  139843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:46:21.006389  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:46:21.034094  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:46:21.062380  139843 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:46:21.063665  139843 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:46:21.066178  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:21.066535  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:21.066565  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:21.066791  139843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:46:21.070835  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:46:21.083165  139843 kubeadm.go:883] updating cluster {Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:46:21.083268  139843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:46:21.083308  139843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:46:21.112818  139843 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 19:46:21.112908  139843 ssh_runner.go:195] Run: which lz4
	I0731 19:46:21.116693  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 19:46:21.116773  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 19:46:21.120894  139843 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 19:46:21.120925  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 19:46:22.492408  139843 crio.go:462] duration metric: took 1.375652525s to copy over tarball
	I0731 19:46:22.492495  139843 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 19:46:24.583933  139843 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.091398881s)
	I0731 19:46:24.583967  139843 crio.go:469] duration metric: took 2.091524869s to extract the tarball
	I0731 19:46:24.583975  139843 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 19:46:24.621603  139843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:46:24.669376  139843 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:46:24.669400  139843 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:46:24.669410  139843 kubeadm.go:934] updating node { 192.168.39.146 8443 v1.30.3 crio true true} ...
	I0731 19:46:24.669542  139843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-235073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:46:24.669635  139843 ssh_runner.go:195] Run: crio config
	I0731 19:46:24.713852  139843 cni.go:84] Creating CNI manager for ""
	I0731 19:46:24.713876  139843 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 19:46:24.713889  139843 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:46:24.713920  139843 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-235073 NodeName:ha-235073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:46:24.714093  139843 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-235073"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 19:46:24.714121  139843 kube-vip.go:115] generating kube-vip config ...
	I0731 19:46:24.714174  139843 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 19:46:24.731610  139843 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 19:46:24.731721  139843 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0731 19:46:24.731791  139843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:46:24.741137  139843 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:46:24.741192  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 19:46:24.750015  139843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 19:46:24.765598  139843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:46:24.781317  139843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 19:46:24.797245  139843 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 19:46:24.813104  139843 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 19:46:24.816768  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:46:24.828262  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:46:24.941199  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:46:24.957225  139843 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073 for IP: 192.168.39.146
	I0731 19:46:24.957251  139843 certs.go:194] generating shared ca certs ...
	I0731 19:46:24.957273  139843 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:24.957485  139843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:46:24.957554  139843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:46:24.957570  139843 certs.go:256] generating profile certs ...
	I0731 19:46:24.957666  139843 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key
	I0731 19:46:24.957686  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt with IP's: []
	I0731 19:46:25.138659  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt ...
	I0731 19:46:25.138691  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt: {Name:mk8eeb47ca9173eddfd8196b7e593e298c83e50a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.138881  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key ...
	I0731 19:46:25.138896  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key: {Name:mkfa9697e2ebe61beb186a68c7c9645a0af9abc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.139002  139843 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.82950da9
	I0731 19:46:25.139021  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.82950da9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.146 192.168.39.254]
	I0731 19:46:25.217551  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.82950da9 ...
	I0731 19:46:25.217584  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.82950da9: {Name:mk11b1d3f3ac82a08a7990ea92b49f5707becbd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.217758  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.82950da9 ...
	I0731 19:46:25.217777  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.82950da9: {Name:mka383161a89784e9944aae91199cdf6fda371f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.217874  139843 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.82950da9 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt
	I0731 19:46:25.217988  139843 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.82950da9 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key
	I0731 19:46:25.218084  139843 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key
	I0731 19:46:25.218104  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt with IP's: []
	I0731 19:46:25.489830  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt ...
	I0731 19:46:25.489864  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt: {Name:mkcd956d75512ed26c96feee86155abe04d06817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.490048  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key ...
	I0731 19:46:25.490062  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key: {Name:mk54989aa19e6971e17508247521aa4df1689b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.490161  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:46:25.490183  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:46:25.490200  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:46:25.490217  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:46:25.490236  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:46:25.490255  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:46:25.490273  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:46:25.490293  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:46:25.490351  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 19:46:25.490401  139843 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 19:46:25.490414  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:46:25.490447  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:46:25.490510  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:46:25.490555  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:46:25.490614  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:46:25.490655  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:46:25.490675  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 19:46:25.490696  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 19:46:25.491277  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:46:25.516812  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:46:25.539578  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:46:25.561873  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:46:25.584064  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 19:46:25.606376  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 19:46:25.628567  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:46:25.651078  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:46:25.673604  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:46:25.696165  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 19:46:25.720811  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 19:46:25.744829  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:46:25.778933  139843 ssh_runner.go:195] Run: openssl version
	I0731 19:46:25.786952  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 19:46:25.799457  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 19:46:25.804084  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 19:46:25.804144  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 19:46:25.809832  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 19:46:25.820347  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 19:46:25.830602  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 19:46:25.834777  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 19:46:25.834809  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 19:46:25.840082  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:46:25.850491  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:46:25.860978  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:46:25.865620  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:46:25.865679  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:46:25.871289  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:46:25.881977  139843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:46:25.886017  139843 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:46:25.886065  139843 kubeadm.go:392] StartCluster: {Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:46:25.886155  139843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:46:25.886201  139843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:46:25.925417  139843 cri.go:89] found id: ""
	I0731 19:46:25.925490  139843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 19:46:25.936024  139843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:46:25.950779  139843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:46:25.962133  139843 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 19:46:25.962150  139843 kubeadm.go:157] found existing configuration files:
	
	I0731 19:46:25.962207  139843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:46:25.972276  139843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 19:46:25.972336  139843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 19:46:25.982555  139843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:46:25.992306  139843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 19:46:25.992399  139843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 19:46:26.002240  139843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:46:26.011658  139843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 19:46:26.011721  139843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:46:26.021447  139843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:46:26.030629  139843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 19:46:26.030685  139843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:46:26.040128  139843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 19:46:26.295845  139843 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:46:38.276896  139843 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 19:46:38.276953  139843 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 19:46:38.277036  139843 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 19:46:38.277141  139843 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 19:46:38.277283  139843 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 19:46:38.277369  139843 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 19:46:38.279111  139843 out.go:204]   - Generating certificates and keys ...
	I0731 19:46:38.279204  139843 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 19:46:38.279296  139843 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 19:46:38.279407  139843 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 19:46:38.279472  139843 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 19:46:38.279528  139843 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 19:46:38.279589  139843 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 19:46:38.279637  139843 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 19:46:38.279758  139843 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-235073 localhost] and IPs [192.168.39.146 127.0.0.1 ::1]
	I0731 19:46:38.279811  139843 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 19:46:38.279981  139843 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-235073 localhost] and IPs [192.168.39.146 127.0.0.1 ::1]
	I0731 19:46:38.280066  139843 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 19:46:38.280154  139843 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 19:46:38.280211  139843 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 19:46:38.280290  139843 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 19:46:38.280364  139843 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 19:46:38.280430  139843 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 19:46:38.280502  139843 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 19:46:38.280584  139843 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 19:46:38.280657  139843 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 19:46:38.280770  139843 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 19:46:38.280857  139843 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 19:46:38.282428  139843 out.go:204]   - Booting up control plane ...
	I0731 19:46:38.282505  139843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 19:46:38.282600  139843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 19:46:38.282691  139843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 19:46:38.282817  139843 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 19:46:38.282941  139843 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 19:46:38.282994  139843 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 19:46:38.283119  139843 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 19:46:38.283182  139843 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 19:46:38.283237  139843 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.374653ms
	I0731 19:46:38.283296  139843 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 19:46:38.283361  139843 kubeadm.go:310] [api-check] The API server is healthy after 6.112498134s
	I0731 19:46:38.283496  139843 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 19:46:38.283660  139843 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 19:46:38.283716  139843 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 19:46:38.283860  139843 kubeadm.go:310] [mark-control-plane] Marking the node ha-235073 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 19:46:38.283913  139843 kubeadm.go:310] [bootstrap-token] Using token: 6dy6ds.nufllor3coa5iqmk
	I0731 19:46:38.285367  139843 out.go:204]   - Configuring RBAC rules ...
	I0731 19:46:38.285458  139843 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 19:46:38.285541  139843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 19:46:38.285660  139843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 19:46:38.285760  139843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 19:46:38.285849  139843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 19:46:38.285923  139843 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 19:46:38.286037  139843 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 19:46:38.286099  139843 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 19:46:38.286138  139843 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 19:46:38.286144  139843 kubeadm.go:310] 
	I0731 19:46:38.286214  139843 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 19:46:38.286229  139843 kubeadm.go:310] 
	I0731 19:46:38.286291  139843 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 19:46:38.286297  139843 kubeadm.go:310] 
	I0731 19:46:38.286336  139843 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 19:46:38.286394  139843 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 19:46:38.286477  139843 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 19:46:38.286489  139843 kubeadm.go:310] 
	I0731 19:46:38.286540  139843 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 19:46:38.286550  139843 kubeadm.go:310] 
	I0731 19:46:38.286588  139843 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 19:46:38.286594  139843 kubeadm.go:310] 
	I0731 19:46:38.286647  139843 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 19:46:38.286732  139843 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 19:46:38.286794  139843 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 19:46:38.286799  139843 kubeadm.go:310] 
	I0731 19:46:38.286883  139843 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 19:46:38.286962  139843 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 19:46:38.286971  139843 kubeadm.go:310] 
	I0731 19:46:38.287067  139843 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6dy6ds.nufllor3coa5iqmk \
	I0731 19:46:38.287207  139843 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 19:46:38.287229  139843 kubeadm.go:310] 	--control-plane 
	I0731 19:46:38.287233  139843 kubeadm.go:310] 
	I0731 19:46:38.287303  139843 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 19:46:38.287309  139843 kubeadm.go:310] 
	I0731 19:46:38.287378  139843 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6dy6ds.nufllor3coa5iqmk \
	I0731 19:46:38.287471  139843 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 19:46:38.287486  139843 cni.go:84] Creating CNI manager for ""
	I0731 19:46:38.287494  139843 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 19:46:38.289660  139843 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 19:46:38.290982  139843 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 19:46:38.296573  139843 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 19:46:38.296589  139843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 19:46:38.314588  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
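For context on the two ssh_runner lines above: the kindnet CNI manifest is copied onto the node and then applied with the kubectl binary bundled under /var/lib/minikube/binaries. The hypothetical Go sketch below shows that apply step as a plain local os/exec call rather than minikube's ssh_runner; the manifest and kubeconfig paths are the node-side paths taken from this log, so the snippet is illustrative only.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// applyManifest applies a Kubernetes manifest with kubectl, roughly the step
// the log performs over SSH with the bundled kubectl binary.
func applyManifest(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	// Paths copied from the log above; they exist on the VM, not on a dev machine.
	if err := applyManifest("/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml"); err != nil {
		log.Fatal(err)
	}
}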
	I0731 19:46:38.674906  139843 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 19:46:38.675011  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:38.675011  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-235073 minikube.k8s.io/updated_at=2024_07_31T19_46_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=ha-235073 minikube.k8s.io/primary=true
	I0731 19:46:38.689669  139843 ops.go:34] apiserver oom_adj: -16
	I0731 19:46:38.806046  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:39.306767  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:39.806688  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:40.306926  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:40.807017  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:41.306141  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:41.807048  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:42.306188  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:42.806087  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:43.306530  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:43.807036  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:44.306293  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:44.806543  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:45.306436  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:45.806665  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:46.306320  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:46.806299  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:47.306983  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:47.806158  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:48.306491  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:48.806404  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:49.306555  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:49.806376  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:50.306188  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:50.806567  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:50.920724  139843 kubeadm.go:1113] duration metric: took 12.245789791s to wait for elevateKubeSystemPrivileges
	I0731 19:46:50.920770  139843 kubeadm.go:394] duration metric: took 25.034709102s to StartCluster
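The block of repeated "kubectl get sa default" runs above is minikube waiting for the default service account to exist before it reports elevateKubeSystemPrivileges as done: the same command is re-run on a roughly 500ms cadence until it succeeds, and the elapsed time is logged as the duration metric. A minimal, hypothetical stdlib-only Go sketch of that wait pattern (plain os/exec and a ticker, not minikube's ssh_runner):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// context expires, mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(ctx context.Context, kubeconfig string) (time.Duration, error) {
	start := time.Now()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return time.Since(start), nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	d, err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Printf("default service account ready after %s\n", d)
}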
	I0731 19:46:50.920795  139843 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:50.920881  139843 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:46:50.922029  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:50.922387  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 19:46:50.922406  139843 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:46:50.922438  139843 start.go:241] waiting for startup goroutines ...
	I0731 19:46:50.922468  139843 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 19:46:50.922552  139843 addons.go:69] Setting storage-provisioner=true in profile "ha-235073"
	I0731 19:46:50.922565  139843 addons.go:69] Setting default-storageclass=true in profile "ha-235073"
	I0731 19:46:50.922659  139843 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-235073"
	I0731 19:46:50.922669  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:46:50.922585  139843 addons.go:234] Setting addon storage-provisioner=true in "ha-235073"
	I0731 19:46:50.922729  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:46:50.923203  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.923210  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.923266  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.923365  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.938471  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I0731 19:46:50.938471  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0731 19:46:50.939025  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.939054  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.939700  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.939718  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.939774  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.939793  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.940072  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.940153  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.940273  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:46:50.940878  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.940926  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.942744  139843 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:46:50.943032  139843 kapi.go:59] client config for ha-235073: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key", CAFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 19:46:50.943550  139843 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 19:46:50.943726  139843 addons.go:234] Setting addon default-storageclass=true in "ha-235073"
	I0731 19:46:50.943759  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:46:50.944078  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.944123  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.955948  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0731 19:46:50.956382  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.956863  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.956888  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.957205  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.957399  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:46:50.959014  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:50.961123  139843 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:46:50.962572  139843 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:46:50.962594  139843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 19:46:50.962614  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:50.963249  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0731 19:46:50.963768  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.964334  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.964357  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.964712  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.965249  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.965320  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.965867  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:50.966298  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:50.966325  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:50.966605  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:50.966787  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:50.966928  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:50.967069  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:50.980464  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0731 19:46:50.980863  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.981352  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.981377  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.981670  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.981847  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:46:50.983303  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:50.983488  139843 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 19:46:50.983503  139843 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 19:46:50.983517  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:50.986171  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:50.986605  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:50.986631  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:50.986786  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:50.986944  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:50.987093  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:50.987248  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:51.033166  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 19:46:51.128226  139843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:46:51.147741  139843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 19:46:51.516879  139843 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
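The long sed pipeline a few lines up rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-only gateway (192.168.39.1), which the "host record injected" line above confirms. As a hedged illustration only, the sketch below performs an equivalent edit with client-go, patching the "coredns" ConfigMap directly instead of piping sed over SSH; the kubeconfig location and the insertion-before-forward rule are the only assumptions beyond what the log shows.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord adds a hosts{} stanza ahead of the forward plugin in the
// CoreDNS Corefile so that host.minikube.internal resolves to hostIP.
func injectHostRecord(kubeconfig, hostIP string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(cm.Data["Corefile"], "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out, stanza) // insert the hosts block just before the forward plugin
		}
		out = append(out, line)
	}
	cm.Data["Corefile"] = strings.Join(out, "\n")
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}

func main() {
	if err := injectHostRecord(clientcmd.RecommendedHomeFile, "192.168.39.1"); err != nil {
		panic(err)
	}
}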
	I0731 19:46:51.794650  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.794689  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.794664  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.794748  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.795071  139843 main.go:141] libmachine: (ha-235073) DBG | Closing plugin on server side
	I0731 19:46:51.795079  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.795097  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.795106  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.795113  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.795120  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.795134  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.795071  139843 main.go:141] libmachine: (ha-235073) DBG | Closing plugin on server side
	I0731 19:46:51.795147  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.795254  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.795327  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.795338  139843 main.go:141] libmachine: (ha-235073) DBG | Closing plugin on server side
	I0731 19:46:51.795342  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.795525  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.795538  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.795557  139843 main.go:141] libmachine: (ha-235073) DBG | Closing plugin on server side
	I0731 19:46:51.795721  139843 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 19:46:51.795735  139843 round_trippers.go:469] Request Headers:
	I0731 19:46:51.795746  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:46:51.795753  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:46:51.815665  139843 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0731 19:46:51.817344  139843 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 19:46:51.817362  139843 round_trippers.go:469] Request Headers:
	I0731 19:46:51.817374  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:46:51.817379  139843 round_trippers.go:473]     Content-Type: application/json
	I0731 19:46:51.817384  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:46:51.821962  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:46:51.822144  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.822162  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.822438  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.822455  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.824297  139843 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 19:46:51.825435  139843 addons.go:510] duration metric: took 902.97956ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 19:46:51.825470  139843 start.go:246] waiting for cluster config update ...
	I0731 19:46:51.825485  139843 start.go:255] writing updated cluster config ...
	I0731 19:46:51.827099  139843 out.go:177] 
	I0731 19:46:51.828566  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:46:51.828645  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:46:51.830552  139843 out.go:177] * Starting "ha-235073-m02" control-plane node in "ha-235073" cluster
	I0731 19:46:51.831795  139843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:46:51.831816  139843 cache.go:56] Caching tarball of preloaded images
	I0731 19:46:51.831925  139843 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:46:51.831939  139843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:46:51.831999  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:46:51.832161  139843 start.go:360] acquireMachinesLock for ha-235073-m02: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:46:51.832202  139843 start.go:364] duration metric: took 23.256µs to acquireMachinesLock for "ha-235073-m02"
	I0731 19:46:51.832218  139843 start.go:93] Provisioning new machine with config: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:46:51.832287  139843 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 19:46:51.833957  139843 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 19:46:51.834035  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:51.834067  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:51.848458  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I0731 19:46:51.848928  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:51.849449  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:51.849472  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:51.849765  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:51.849939  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetMachineName
	I0731 19:46:51.850082  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:46:51.850273  139843 start.go:159] libmachine.API.Create for "ha-235073" (driver="kvm2")
	I0731 19:46:51.850301  139843 client.go:168] LocalClient.Create starting
	I0731 19:46:51.850334  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 19:46:51.850371  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:46:51.850388  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:46:51.850462  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 19:46:51.850486  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:46:51.850502  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:46:51.850525  139843 main.go:141] libmachine: Running pre-create checks...
	I0731 19:46:51.850536  139843 main.go:141] libmachine: (ha-235073-m02) Calling .PreCreateCheck
	I0731 19:46:51.850681  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetConfigRaw
	I0731 19:46:51.851052  139843 main.go:141] libmachine: Creating machine...
	I0731 19:46:51.851064  139843 main.go:141] libmachine: (ha-235073-m02) Calling .Create
	I0731 19:46:51.851175  139843 main.go:141] libmachine: (ha-235073-m02) Creating KVM machine...
	I0731 19:46:51.852291  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found existing default KVM network
	I0731 19:46:51.852447  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found existing private KVM network mk-ha-235073
	I0731 19:46:51.852595  139843 main.go:141] libmachine: (ha-235073-m02) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02 ...
	I0731 19:46:51.852617  139843 main.go:141] libmachine: (ha-235073-m02) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:46:51.852718  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:51.852582  140223 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:46:51.852806  139843 main.go:141] libmachine: (ha-235073-m02) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 19:46:52.129760  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:52.129625  140223 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa...
	I0731 19:46:52.220476  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:52.220360  140223 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/ha-235073-m02.rawdisk...
	I0731 19:46:52.220506  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Writing magic tar header
	I0731 19:46:52.220518  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Writing SSH key tar header
	I0731 19:46:52.220533  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:52.220501  140223 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02 ...
	I0731 19:46:52.220673  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02
	I0731 19:46:52.220711  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 19:46:52.220727  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02 (perms=drwx------)
	I0731 19:46:52.220747  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:46:52.220759  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 19:46:52.220771  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 19:46:52.220784  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:46:52.220794  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:46:52.220810  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:46:52.220821  139843 main.go:141] libmachine: (ha-235073-m02) Creating domain...
	I0731 19:46:52.220833  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 19:46:52.220844  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:46:52.220853  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:46:52.220864  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home
	I0731 19:46:52.220874  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Skipping /home - not owner
	I0731 19:46:52.221756  139843 main.go:141] libmachine: (ha-235073-m02) define libvirt domain using xml: 
	I0731 19:46:52.221778  139843 main.go:141] libmachine: (ha-235073-m02) <domain type='kvm'>
	I0731 19:46:52.221789  139843 main.go:141] libmachine: (ha-235073-m02)   <name>ha-235073-m02</name>
	I0731 19:46:52.221796  139843 main.go:141] libmachine: (ha-235073-m02)   <memory unit='MiB'>2200</memory>
	I0731 19:46:52.221807  139843 main.go:141] libmachine: (ha-235073-m02)   <vcpu>2</vcpu>
	I0731 19:46:52.221813  139843 main.go:141] libmachine: (ha-235073-m02)   <features>
	I0731 19:46:52.221823  139843 main.go:141] libmachine: (ha-235073-m02)     <acpi/>
	I0731 19:46:52.221829  139843 main.go:141] libmachine: (ha-235073-m02)     <apic/>
	I0731 19:46:52.221840  139843 main.go:141] libmachine: (ha-235073-m02)     <pae/>
	I0731 19:46:52.221850  139843 main.go:141] libmachine: (ha-235073-m02)     
	I0731 19:46:52.221870  139843 main.go:141] libmachine: (ha-235073-m02)   </features>
	I0731 19:46:52.221887  139843 main.go:141] libmachine: (ha-235073-m02)   <cpu mode='host-passthrough'>
	I0731 19:46:52.221896  139843 main.go:141] libmachine: (ha-235073-m02)   
	I0731 19:46:52.221901  139843 main.go:141] libmachine: (ha-235073-m02)   </cpu>
	I0731 19:46:52.221912  139843 main.go:141] libmachine: (ha-235073-m02)   <os>
	I0731 19:46:52.221922  139843 main.go:141] libmachine: (ha-235073-m02)     <type>hvm</type>
	I0731 19:46:52.221931  139843 main.go:141] libmachine: (ha-235073-m02)     <boot dev='cdrom'/>
	I0731 19:46:52.221940  139843 main.go:141] libmachine: (ha-235073-m02)     <boot dev='hd'/>
	I0731 19:46:52.221952  139843 main.go:141] libmachine: (ha-235073-m02)     <bootmenu enable='no'/>
	I0731 19:46:52.221968  139843 main.go:141] libmachine: (ha-235073-m02)   </os>
	I0731 19:46:52.222014  139843 main.go:141] libmachine: (ha-235073-m02)   <devices>
	I0731 19:46:52.222038  139843 main.go:141] libmachine: (ha-235073-m02)     <disk type='file' device='cdrom'>
	I0731 19:46:52.222051  139843 main.go:141] libmachine: (ha-235073-m02)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/boot2docker.iso'/>
	I0731 19:46:52.222061  139843 main.go:141] libmachine: (ha-235073-m02)       <target dev='hdc' bus='scsi'/>
	I0731 19:46:52.222071  139843 main.go:141] libmachine: (ha-235073-m02)       <readonly/>
	I0731 19:46:52.222082  139843 main.go:141] libmachine: (ha-235073-m02)     </disk>
	I0731 19:46:52.222094  139843 main.go:141] libmachine: (ha-235073-m02)     <disk type='file' device='disk'>
	I0731 19:46:52.222106  139843 main.go:141] libmachine: (ha-235073-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:46:52.222142  139843 main.go:141] libmachine: (ha-235073-m02)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/ha-235073-m02.rawdisk'/>
	I0731 19:46:52.222167  139843 main.go:141] libmachine: (ha-235073-m02)       <target dev='hda' bus='virtio'/>
	I0731 19:46:52.222179  139843 main.go:141] libmachine: (ha-235073-m02)     </disk>
	I0731 19:46:52.222192  139843 main.go:141] libmachine: (ha-235073-m02)     <interface type='network'>
	I0731 19:46:52.222206  139843 main.go:141] libmachine: (ha-235073-m02)       <source network='mk-ha-235073'/>
	I0731 19:46:52.222218  139843 main.go:141] libmachine: (ha-235073-m02)       <model type='virtio'/>
	I0731 19:46:52.222230  139843 main.go:141] libmachine: (ha-235073-m02)     </interface>
	I0731 19:46:52.222241  139843 main.go:141] libmachine: (ha-235073-m02)     <interface type='network'>
	I0731 19:46:52.222252  139843 main.go:141] libmachine: (ha-235073-m02)       <source network='default'/>
	I0731 19:46:52.222263  139843 main.go:141] libmachine: (ha-235073-m02)       <model type='virtio'/>
	I0731 19:46:52.222282  139843 main.go:141] libmachine: (ha-235073-m02)     </interface>
	I0731 19:46:52.222301  139843 main.go:141] libmachine: (ha-235073-m02)     <serial type='pty'>
	I0731 19:46:52.222314  139843 main.go:141] libmachine: (ha-235073-m02)       <target port='0'/>
	I0731 19:46:52.222324  139843 main.go:141] libmachine: (ha-235073-m02)     </serial>
	I0731 19:46:52.222336  139843 main.go:141] libmachine: (ha-235073-m02)     <console type='pty'>
	I0731 19:46:52.222348  139843 main.go:141] libmachine: (ha-235073-m02)       <target type='serial' port='0'/>
	I0731 19:46:52.222376  139843 main.go:141] libmachine: (ha-235073-m02)     </console>
	I0731 19:46:52.222394  139843 main.go:141] libmachine: (ha-235073-m02)     <rng model='virtio'>
	I0731 19:46:52.222411  139843 main.go:141] libmachine: (ha-235073-m02)       <backend model='random'>/dev/random</backend>
	I0731 19:46:52.222431  139843 main.go:141] libmachine: (ha-235073-m02)     </rng>
	I0731 19:46:52.222444  139843 main.go:141] libmachine: (ha-235073-m02)     
	I0731 19:46:52.222454  139843 main.go:141] libmachine: (ha-235073-m02)     
	I0731 19:46:52.222466  139843 main.go:141] libmachine: (ha-235073-m02)   </devices>
	I0731 19:46:52.222477  139843 main.go:141] libmachine: (ha-235073-m02) </domain>
	I0731 19:46:52.222488  139843 main.go:141] libmachine: (ha-235073-m02) 
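The XML dumped above is the libvirt domain definition for the second control-plane VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a cdrom, the raw disk image, interfaces on the mk-ha-235073 and default networks, a serial console, and a virtio RNG. As a hedged illustration only (minikube goes through its kvm2 machine driver, not this code), the sketch below defines and starts a domain from an XML string with the libvirt-go bindings; the XML file path is a placeholder.

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

// defineAndStart registers a domain from its XML definition and boots it,
// roughly what the kvm2 driver does after emitting the XML shown in the log.
func defineAndStart(uri, xmlPath string) error {
	xml, err := os.ReadFile(xmlPath)
	if err != nil {
		return err
	}
	conn, err := libvirt.NewConnect(uri) // e.g. qemu:///system, matching KVMQemuURI in the profile config
	if err != nil {
		return err
	}
	defer conn.Close()
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create() // start the defined domain
}

func main() {
	if err := defineAndStart("qemu:///system", "ha-235073-m02.xml"); err != nil {
		log.Fatal(err)
	}
}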
	I0731 19:46:52.229071  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:76:68:c2 in network default
	I0731 19:46:52.229666  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:52.229684  139843 main.go:141] libmachine: (ha-235073-m02) Ensuring networks are active...
	I0731 19:46:52.230358  139843 main.go:141] libmachine: (ha-235073-m02) Ensuring network default is active
	I0731 19:46:52.230702  139843 main.go:141] libmachine: (ha-235073-m02) Ensuring network mk-ha-235073 is active
	I0731 19:46:52.231096  139843 main.go:141] libmachine: (ha-235073-m02) Getting domain xml...
	I0731 19:46:52.231766  139843 main.go:141] libmachine: (ha-235073-m02) Creating domain...
	I0731 19:46:53.412056  139843 main.go:141] libmachine: (ha-235073-m02) Waiting to get IP...
	I0731 19:46:53.412873  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:53.413194  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:53.413212  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:53.413189  140223 retry.go:31] will retry after 312.469495ms: waiting for machine to come up
	I0731 19:46:53.727499  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:53.727975  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:53.728008  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:53.727940  140223 retry.go:31] will retry after 369.713539ms: waiting for machine to come up
	I0731 19:46:54.099438  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:54.099870  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:54.099899  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:54.099836  140223 retry.go:31] will retry after 359.388499ms: waiting for machine to come up
	I0731 19:46:54.461310  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:54.461862  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:54.461892  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:54.461817  140223 retry.go:31] will retry after 581.689874ms: waiting for machine to come up
	I0731 19:46:55.045760  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:55.046207  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:55.046235  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:55.046171  140223 retry.go:31] will retry after 622.054876ms: waiting for machine to come up
	I0731 19:46:55.670059  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:55.670452  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:55.670479  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:55.670410  140223 retry.go:31] will retry after 810.839747ms: waiting for machine to come up
	I0731 19:46:56.482516  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:56.482857  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:56.482883  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:56.482825  140223 retry.go:31] will retry after 1.105583581s: waiting for machine to come up
	I0731 19:46:57.590408  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:57.590800  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:57.590830  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:57.590749  140223 retry.go:31] will retry after 1.461697958s: waiting for machine to come up
	I0731 19:46:59.054527  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:59.054908  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:59.054937  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:59.054861  140223 retry.go:31] will retry after 1.153075906s: waiting for machine to come up
	I0731 19:47:00.209551  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:00.210027  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:00.210057  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:00.209979  140223 retry.go:31] will retry after 1.436509555s: waiting for machine to come up
	I0731 19:47:01.648504  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:01.649027  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:01.649055  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:01.648965  140223 retry.go:31] will retry after 1.954522866s: waiting for machine to come up
	I0731 19:47:03.605798  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:03.606255  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:03.606278  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:03.606211  140223 retry.go:31] will retry after 2.813375548s: waiting for machine to come up
	I0731 19:47:06.422537  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:06.422994  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:06.423023  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:06.422955  140223 retry.go:31] will retry after 3.497609634s: waiting for machine to come up
	I0731 19:47:09.924629  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:09.925033  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:09.925058  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:09.924981  140223 retry.go:31] will retry after 4.532256157s: waiting for machine to come up
	I0731 19:47:14.460269  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.460705  139843 main.go:141] libmachine: (ha-235073-m02) Found IP for machine: 192.168.39.102
	I0731 19:47:14.460741  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.460753  139843 main.go:141] libmachine: (ha-235073-m02) Reserving static IP address...
	I0731 19:47:14.461076  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find host DHCP lease matching {name: "ha-235073-m02", mac: "52:54:00:41:fe:7b", ip: "192.168.39.102"} in network mk-ha-235073
	I0731 19:47:14.531501  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Getting to WaitForSSH function...
	I0731 19:47:14.531532  139843 main.go:141] libmachine: (ha-235073-m02) Reserved static IP address: 192.168.39.102
	I0731 19:47:14.531546  139843 main.go:141] libmachine: (ha-235073-m02) Waiting for SSH to be available...
	I0731 19:47:14.534237  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.534668  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:14.534697  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.534857  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Using SSH client type: external
	I0731 19:47:14.534869  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa (-rw-------)
	I0731 19:47:14.535498  139843 main.go:141] libmachine: (ha-235073-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:47:14.535522  139843 main.go:141] libmachine: (ha-235073-m02) DBG | About to run SSH command:
	I0731 19:47:14.535536  139843 main.go:141] libmachine: (ha-235073-m02) DBG | exit 0
	I0731 19:47:14.661583  139843 main.go:141] libmachine: (ha-235073-m02) DBG | SSH cmd err, output: <nil>: 
	I0731 19:47:14.661865  139843 main.go:141] libmachine: (ha-235073-m02) KVM machine creation complete!
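Between defining the domain and the "KVM machine creation complete!" line above, the driver polls the DHCP leases of mk-ha-235073 for the new MAC, sleeping a little longer after each miss (312ms, 369ms, ... up to several seconds), and then probes SSH with "exit 0". A hypothetical stdlib-only Go sketch of that retry shape, with the lease lookup stubbed out so the snippet stays self-contained:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for the real DHCP-lease query against the libvirt
// network; it always fails here so the retry loop below can be demonstrated.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries the lease lookup with a growing, jittered delay, the same
// "will retry after ..." pattern the log prints while the VM boots.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 300 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // back off gradually; only the overall deadline caps it
	}
	return "", fmt.Errorf("no IP for %s within %s", mac, deadline)
}

func main() {
	// MAC address taken from the log above; the short deadline is for demonstration.
	ip, err := waitForIP("52:54:00:41:fe:7b", 10*time.Second)
	fmt.Println(ip, err)
}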
	I0731 19:47:14.662138  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetConfigRaw
	I0731 19:47:14.662744  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:14.662949  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:14.663161  139843 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:47:14.663190  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:47:14.664499  139843 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:47:14.664514  139843 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:47:14.664535  139843 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:47:14.664544  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:14.666950  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.667276  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:14.667315  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.667448  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:14.667637  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.667803  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.667930  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:14.668058  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:14.668297  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:14.668314  139843 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:47:14.776780  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:47:14.776808  139843 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:47:14.776818  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:14.779592  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.779963  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:14.779992  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.780187  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:14.780399  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.780571  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.780705  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:14.780864  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:14.781026  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:14.781039  139843 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:47:14.891250  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:47:14.891346  139843 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:47:14.891363  139843 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:47:14.891377  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetMachineName
	I0731 19:47:14.891705  139843 buildroot.go:166] provisioning hostname "ha-235073-m02"
	I0731 19:47:14.891737  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetMachineName
	I0731 19:47:14.891942  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:14.894788  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.895144  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:14.895178  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.895262  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:14.895458  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.895621  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.895817  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:14.896067  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:14.896257  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:14.896273  139843 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-235073-m02 && echo "ha-235073-m02" | sudo tee /etc/hostname
	I0731 19:47:15.019121  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073-m02
	
	I0731 19:47:15.019146  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.021653  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.021965  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.021984  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.022159  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.022340  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.022534  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.022690  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.022874  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:15.023044  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:15.023059  139843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-235073-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-235073-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-235073-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:47:15.142986  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:47:15.143022  139843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:47:15.143040  139843 buildroot.go:174] setting up certificates
	I0731 19:47:15.143048  139843 provision.go:84] configureAuth start
	I0731 19:47:15.143057  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetMachineName
	I0731 19:47:15.143354  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:47:15.145989  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.146324  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.146355  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.146548  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.148348  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.148760  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.148788  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.148854  139843 provision.go:143] copyHostCerts
	I0731 19:47:15.148891  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:47:15.148926  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 19:47:15.148936  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:47:15.149023  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:47:15.149130  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:47:15.149159  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 19:47:15.149166  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:47:15.149195  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:47:15.149246  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:47:15.149263  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 19:47:15.149267  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:47:15.149288  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:47:15.149359  139843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.ha-235073-m02 san=[127.0.0.1 192.168.39.102 ha-235073-m02 localhost minikube]
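	(The server certificate above is produced in-process by minikube's Go crypto helpers. Purely as an illustrative sketch, and not the code path this log reflects, a CA-signed server certificate with the same SANs could be produced with OpenSSL roughly like this, reusing the ca.pem/ca-key.pem paths named in the log:)

	# Hypothetical OpenSSL equivalent of the provision.go server cert step; the real code uses Go's crypto/x509.
	CERTS=/home/jenkins/minikube-integration/19355-121704/.minikube/certs
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.ha-235073-m02"
	openssl x509 -req -in server.csr -days 365 \
	  -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.102,DNS:ha-235073-m02,DNS:localhost,DNS:minikube") \
	  -out server.pem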
	I0731 19:47:15.254916  139843 provision.go:177] copyRemoteCerts
	I0731 19:47:15.254975  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:47:15.255001  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.257781  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.258110  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.258130  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.258329  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.258509  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.258634  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.258743  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:47:15.343735  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:47:15.343811  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 19:47:15.368324  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:47:15.368461  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:47:15.391622  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:47:15.391688  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:47:15.414702  139843 provision.go:87] duration metric: took 271.638616ms to configureAuth
	I0731 19:47:15.414740  139843 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:47:15.414917  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:47:15.414997  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.417430  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.417806  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.417835  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.417991  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.418205  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.418372  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.418526  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.418656  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:15.418833  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:15.418853  139843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:47:15.691749  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:47:15.691774  139843 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:47:15.691781  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetURL
	I0731 19:47:15.693140  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Using libvirt version 6000000
	I0731 19:47:15.695171  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.695499  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.695529  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.695742  139843 main.go:141] libmachine: Docker is up and running!
	I0731 19:47:15.695758  139843 main.go:141] libmachine: Reticulating splines...
	I0731 19:47:15.695768  139843 client.go:171] duration metric: took 23.845457271s to LocalClient.Create
	I0731 19:47:15.695796  139843 start.go:167] duration metric: took 23.845522725s to libmachine.API.Create "ha-235073"
	I0731 19:47:15.695808  139843 start.go:293] postStartSetup for "ha-235073-m02" (driver="kvm2")
	I0731 19:47:15.695822  139843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:47:15.695847  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.696128  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:47:15.696154  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.698342  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.698651  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.698677  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.698830  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.699045  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.699174  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.699309  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:47:15.785084  139843 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:47:15.789227  139843 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:47:15.789247  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:47:15.789296  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:47:15.789399  139843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 19:47:15.789410  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 19:47:15.789493  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:47:15.799406  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:47:15.823179  139843 start.go:296] duration metric: took 127.354941ms for postStartSetup
	I0731 19:47:15.823231  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetConfigRaw
	I0731 19:47:15.823808  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:47:15.826281  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.826625  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.826651  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.826861  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:47:15.827051  139843 start.go:128] duration metric: took 23.994752429s to createHost
	I0731 19:47:15.827073  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.829161  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.829509  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.829548  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.829701  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.829906  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.830042  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.830178  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.830298  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:15.830470  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:15.830481  139843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:47:15.938133  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722455235.914439840
	
	I0731 19:47:15.938157  139843 fix.go:216] guest clock: 1722455235.914439840
	I0731 19:47:15.938171  139843 fix.go:229] Guest: 2024-07-31 19:47:15.91443984 +0000 UTC Remote: 2024-07-31 19:47:15.827062034 +0000 UTC m=+77.635992638 (delta=87.377806ms)
	I0731 19:47:15.938192  139843 fix.go:200] guest clock delta is within tolerance: 87.377806ms
	I0731 19:47:15.938200  139843 start.go:83] releasing machines lock for "ha-235073-m02", held for 24.105988261s
	I0731 19:47:15.938242  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.938558  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:47:15.941151  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.941543  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.941571  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.944039  139843 out.go:177] * Found network options:
	I0731 19:47:15.945397  139843 out.go:177]   - NO_PROXY=192.168.39.146
	W0731 19:47:15.946608  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 19:47:15.946636  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.947197  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.947386  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.947523  139843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:47:15.947566  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	W0731 19:47:15.947661  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 19:47:15.947756  139843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:47:15.947778  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.950389  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.950650  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.950747  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.950776  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.950902  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.951004  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.951039  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.951085  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.951174  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.951258  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.951323  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.951396  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:47:15.951453  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.951592  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:47:16.185751  139843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:47:16.192023  139843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:47:16.192099  139843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:47:16.207389  139843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:47:16.207421  139843 start.go:495] detecting cgroup driver to use...
	I0731 19:47:16.207507  139843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:47:16.224133  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:47:16.238012  139843 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:47:16.238072  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:47:16.251964  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:47:16.265421  139843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:47:16.396523  139843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:47:16.539052  139843 docker.go:233] disabling docker service ...
	I0731 19:47:16.539153  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:47:16.553387  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:47:16.565915  139843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:47:16.698838  139843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:47:16.808112  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:47:16.821882  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:47:16.839748  139843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:47:16.839803  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.849843  139843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:47:16.849902  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.860126  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.870007  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.880142  139843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:47:16.890299  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.900152  139843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.919452  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
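	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands in the log, not a dump of the actual file on the node:)

	# Approximate net effect of the sed edits above (illustrative; the real file is edited in place).
	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl restart crio   # the log restarts crio a few lines further down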
	I0731 19:47:16.929097  139843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:47:16.938160  139843 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:47:16.938224  139843 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:47:16.950582  139843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
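	(The br_netfilter modprobe and the ip_forward write above only affect the running kernel. A hedged sketch of how the same two settings could be made persistent on a generic systemd host; the logged run does not do this:)

	# Illustrative persistence of the two kernel settings toggled above.
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	net.ipv4.ip_forward = 1
	net.bridge.bridge-nf-call-iptables = 1
	EOF
	sudo sysctl --system   # reload all sysctl.d fragments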
	I0731 19:47:16.960358  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:47:17.071168  139843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:47:17.207181  139843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:47:17.207270  139843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:47:17.212016  139843 start.go:563] Will wait 60s for crictl version
	I0731 19:47:17.212075  139843 ssh_runner.go:195] Run: which crictl
	I0731 19:47:17.215671  139843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:47:17.254175  139843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:47:17.254261  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:47:17.281681  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:47:17.313016  139843 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:47:17.314286  139843 out.go:177]   - env NO_PROXY=192.168.39.146
	I0731 19:47:17.315349  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:47:17.317820  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:17.318162  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:17.318192  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:17.318308  139843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:47:17.322441  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:47:17.334564  139843 mustload.go:65] Loading cluster: ha-235073
	I0731 19:47:17.334755  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:47:17.335089  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:47:17.335139  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:47:17.349535  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0731 19:47:17.349972  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:47:17.350392  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:47:17.350413  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:47:17.350744  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:47:17.350931  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:47:17.352528  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:47:17.352808  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:47:17.352840  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:47:17.367108  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0731 19:47:17.367497  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:47:17.367913  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:47:17.367932  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:47:17.368270  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:47:17.368442  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:47:17.368586  139843 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073 for IP: 192.168.39.102
	I0731 19:47:17.368598  139843 certs.go:194] generating shared ca certs ...
	I0731 19:47:17.368613  139843 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:47:17.368729  139843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:47:17.368765  139843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:47:17.368774  139843 certs.go:256] generating profile certs ...
	I0731 19:47:17.368842  139843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key
	I0731 19:47:17.368866  139843 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.9f43a361
	I0731 19:47:17.368880  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.9f43a361 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.146 192.168.39.102 192.168.39.254]
	I0731 19:47:17.455057  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.9f43a361 ...
	I0731 19:47:17.455086  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.9f43a361: {Name:mkf6dee4ca9d5bbdb847f1e93802c1d5fc8eb860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:47:17.455250  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.9f43a361 ...
	I0731 19:47:17.455268  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.9f43a361: {Name:mk97519bc18e642aa64f8384b86a970446bea27e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:47:17.455378  139843 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.9f43a361 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt
	I0731 19:47:17.455521  139843 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.9f43a361 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key
	I0731 19:47:17.455646  139843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key
	I0731 19:47:17.455662  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:47:17.455676  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:47:17.455690  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:47:17.455708  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:47:17.455720  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:47:17.455732  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:47:17.455744  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:47:17.455757  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:47:17.455803  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 19:47:17.455830  139843 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 19:47:17.455840  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:47:17.455860  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:47:17.455901  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:47:17.455922  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:47:17.455990  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:47:17.456021  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 19:47:17.456035  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 19:47:17.456048  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:47:17.456080  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:47:17.458766  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:47:17.459214  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:47:17.459242  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:47:17.459403  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:47:17.459578  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:47:17.459739  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:47:17.459871  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:47:17.529761  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 19:47:17.534900  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 19:47:17.545715  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 19:47:17.549918  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0731 19:47:17.559479  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 19:47:17.563390  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 19:47:17.572999  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 19:47:17.576983  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 19:47:17.586912  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 19:47:17.590976  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 19:47:17.602229  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 19:47:17.606397  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 19:47:17.616408  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:47:17.644649  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:47:17.671633  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:47:17.698471  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:47:17.725427  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 19:47:17.751908  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 19:47:17.778535  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:47:17.803064  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:47:17.826956  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 19:47:17.853331  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 19:47:17.879425  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:47:17.902803  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 19:47:17.919217  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0731 19:47:17.935498  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 19:47:17.951571  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 19:47:17.967478  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 19:47:17.983153  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 19:47:17.999005  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 19:47:18.015041  139843 ssh_runner.go:195] Run: openssl version
	I0731 19:47:18.020704  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:47:18.031290  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:47:18.035786  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:47:18.035845  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:47:18.041498  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:47:18.051926  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 19:47:18.062614  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 19:47:18.067024  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 19:47:18.067085  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 19:47:18.072613  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 19:47:18.082720  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 19:47:18.093102  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 19:47:18.097297  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 19:47:18.097500  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 19:47:18.102999  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
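	(The `openssl x509 -hash` runs and the numeric symlink names above follow OpenSSL's subject-hash convention: the trust directory is indexed by <hash>.0 symlinks so an issuer can be found by hashing its subject. A small sketch of the same pattern, using the minikubeCA hash that appears in the log, b5213941:)

	# Subject-hash lookup convention used under /etc/ssl/certs (illustrative).
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$HASH"                      # prints b5213941 for this CA, per the symlink above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"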
	I0731 19:47:18.113248  139843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:47:18.116995  139843 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:47:18.117040  139843 kubeadm.go:934] updating node {m02 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0731 19:47:18.117121  139843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-235073-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:47:18.117143  139843 kube-vip.go:115] generating kube-vip config ...
	I0731 19:47:18.117170  139843 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 19:47:18.131937  139843 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 19:47:18.131992  139843 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
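	(Once this manifest lands in /etc/kubernetes/manifests, which the log does a little further down, kubelet runs kube-vip as a static pod and the pod claims the control-plane VIP 192.168.39.254 on eth0. Hedged verification commands, assuming crictl and iproute2 are present in the Buildroot image; they are not part of the logged test run:)

	# Illustrative checks only.
	sudo crictl ps --name kube-vip            # static pod container should be running
	ip addr show eth0 | grep 192.168.39.254   # VIP appears on eth0 on the current leader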
	I0731 19:47:18.132031  139843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:47:18.140996  139843 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 19:47:18.141038  139843 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 19:47:18.149761  139843 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 19:47:18.149782  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 19:47:18.149832  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 19:47:18.149913  139843 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 19:47:18.149941  139843 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 19:47:18.154681  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 19:47:18.154703  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 19:47:51.221441  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 19:47:51.221532  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 19:47:51.227059  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 19:47:51.227093  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 19:48:27.139335  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:48:27.155160  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 19:48:27.155256  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 19:48:27.159527  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 19:48:27.159564  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
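
The lines above show minikube's binary provisioning pattern for this node: kubectl, kubeadm, and kubelet are fetched from dl.k8s.io together with a companion .sha256 checksum file, staged in the host-side cache, and copied over SSH into /var/lib/minikube/binaries/v1.30.3 because the stat existence check failed on the VM. A minimal Go sketch of that checksum-verified download step, under the assumption of a single binary written to the current directory (the fetch helper and output path are illustrative, not minikube's actual code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory (fine for a CLI-sized binary in a sketch).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"

	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	// The .sha256 companion file holds the expected hex digest.
	sum, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0]

	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch, refusing to install")
	}
	// Illustrative destination; in the log the verified file is scp'd to
	// /var/lib/minikube/binaries/v1.30.3/ on the VM instead.
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl downloaded and checksum-verified")
}
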
	I0731 19:48:27.530005  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 19:48:27.539327  139843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 19:48:27.555689  139843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:48:27.571458  139843 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 19:48:27.587456  139843 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 19:48:27.591145  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:48:27.602557  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:48:27.727589  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:48:27.744201  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:48:27.744588  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:48:27.744648  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:48:27.759789  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0731 19:48:27.760358  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:48:27.760862  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:48:27.760886  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:48:27.761206  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:48:27.761439  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:48:27.761608  139843 start.go:317] joinCluster: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:48:27.761732  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 19:48:27.761754  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:48:27.764780  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:48:27.765241  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:48:27.765269  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:48:27.765397  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:48:27.765564  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:48:27.765724  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:48:27.765866  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:48:27.936264  139843 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:48:27.936345  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z9kd5i.x3x4iu01r1g1k8ha --discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-235073-m02 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I0731 19:48:48.949241  139843 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z9kd5i.x3x4iu01r1g1k8ha --discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-235073-m02 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (21.012858968s)
	I0731 19:48:48.949285  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 19:48:49.508600  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-235073-m02 minikube.k8s.io/updated_at=2024_07_31T19_48_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=ha-235073 minikube.k8s.io/primary=false
	I0731 19:48:49.663865  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-235073-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 19:48:49.802858  139843 start.go:319] duration metric: took 22.041241164s to joinCluster
	I0731 19:48:49.802957  139843 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:48:49.803266  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:48:49.804022  139843 out.go:177] * Verifying Kubernetes components...
	I0731 19:48:49.805010  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:48:50.074819  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:48:50.181665  139843 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:48:50.181931  139843 kapi.go:59] client config for ha-235073: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key", CAFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 19:48:50.181996  139843 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.146:8443
	I0731 19:48:50.182233  139843 node_ready.go:35] waiting up to 6m0s for node "ha-235073-m02" to be "Ready" ...
	I0731 19:48:50.182335  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:50.182345  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:50.182356  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:50.182363  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:50.191663  139843 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 19:48:50.682793  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:50.682821  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:50.682833  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:50.682838  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:50.687341  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:48:51.182541  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:51.182570  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:51.182582  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:51.182587  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:51.187103  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:48:51.682921  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:51.682943  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:51.682953  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:51.682957  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:51.685841  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:48:52.182664  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:52.182700  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:52.182712  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:52.182718  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:52.186372  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:52.187079  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:48:52.683294  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:52.683316  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:52.683325  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:52.683329  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:52.686398  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:53.182470  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:53.182492  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:53.182501  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:53.182506  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:53.185744  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:53.682762  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:53.682785  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:53.682794  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:53.682798  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:53.686061  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:54.183148  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:54.183174  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:54.183184  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:54.183187  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:54.186602  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:54.187763  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:48:54.683000  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:54.683025  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:54.683035  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:54.683040  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:54.687691  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:48:55.183253  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:55.183282  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:55.183295  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:55.183301  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:55.192413  139843 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 19:48:55.682478  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:55.682500  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:55.682508  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:55.682512  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:55.685757  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:56.182832  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:56.182855  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:56.182864  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:56.182868  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:56.188401  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:48:56.189535  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:48:56.682662  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:56.682693  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:56.682704  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:56.682709  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:56.685692  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:48:57.183288  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:57.183311  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:57.183319  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:57.183323  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:57.186635  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:57.683267  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:57.683294  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:57.683306  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:57.683313  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:57.686725  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:58.183235  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:58.183258  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:58.183267  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:58.183275  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:58.186592  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:58.682467  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:58.682496  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:58.682506  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:58.682510  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:58.685493  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:48:58.686033  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:48:59.183427  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:59.183452  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:59.183461  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:59.183467  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:59.186875  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:59.682808  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:59.682833  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:59.682844  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:59.682853  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:59.686486  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:00.182531  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:00.182555  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:00.182568  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:00.182577  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:00.185757  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:00.682480  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:00.682502  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:00.682510  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:00.682514  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:00.686623  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:49:00.687179  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:49:01.182674  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:01.182697  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:01.182705  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:01.182709  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:01.185990  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:01.682578  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:01.682601  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:01.682610  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:01.682614  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:01.685844  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:02.182527  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:02.182551  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:02.182562  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:02.182568  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:02.185966  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:02.683121  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:02.683145  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:02.683155  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:02.683162  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:02.686333  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:03.182831  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:03.182854  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:03.182863  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:03.182867  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:03.188529  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:49:03.189001  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:49:03.682842  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:03.682868  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:03.682877  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:03.682883  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:03.686425  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:04.182502  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:04.182528  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:04.182539  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:04.182545  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:04.185626  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:04.682475  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:04.682505  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:04.682514  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:04.682517  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:04.685643  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:05.182936  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:05.182964  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:05.182976  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:05.182982  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:05.188162  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:49:05.683270  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:05.683293  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:05.683301  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:05.683306  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:05.686816  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:05.687348  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:49:06.182631  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:06.182660  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:06.182671  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:06.182677  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:06.186152  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:06.682905  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:06.682930  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:06.682940  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:06.682945  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:06.686236  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:07.182723  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:07.182748  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:07.182757  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:07.182760  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:07.185925  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:07.682446  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:07.682472  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:07.682484  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:07.682489  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:07.686012  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:08.183448  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:08.183471  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.183487  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.183494  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.186821  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:08.187289  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:49:08.683467  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:08.683494  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.683506  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.683510  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.688092  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:49:08.688619  139843 node_ready.go:49] node "ha-235073-m02" has status "Ready":"True"
	I0731 19:49:08.688638  139843 node_ready.go:38] duration metric: took 18.506386927s for node "ha-235073-m02" to be "Ready" ...
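
The repeated GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02 round trips above are the readiness poll: the freshly joined node is fetched roughly every 500ms until its Ready condition reports True, within the 6m0s budget (here it took about 18.5s). A minimal client-go sketch of that style of poll, with the kubeconfig path and sleep interval chosen for illustration rather than taken from minikube:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; minikube loads its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "ha-235073-m02"
	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget from the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				// A node counts as "Ready" once its NodeReady condition is True.
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node", nodeName, "is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval visible in the log
	}
	fmt.Println("timed out waiting for", nodeName)
}
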
	I0731 19:49:08.688649  139843 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:49:08.688757  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:08.688769  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.688779  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.688784  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.697786  139843 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 19:49:08.704037  139843 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.704140  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-d2w7q
	I0731 19:49:08.704150  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.704161  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.704166  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.707321  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:08.707967  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:08.707981  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.707992  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.707999  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.710402  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.711104  139843 pod_ready.go:92] pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:08.711120  139843 pod_ready.go:81] duration metric: took 7.059182ms for pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.711128  139843 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.711186  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f7dzt
	I0731 19:49:08.711194  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.711201  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.711205  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.713629  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.714392  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:08.714406  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.714415  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.714421  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.716417  139843 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 19:49:08.716975  139843 pod_ready.go:92] pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:08.716988  139843 pod_ready.go:81] duration metric: took 5.853322ms for pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.716996  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.717042  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073
	I0731 19:49:08.717049  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.717055  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.717061  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.719192  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.719747  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:08.719759  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.719766  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.719769  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.721906  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.722367  139843 pod_ready.go:92] pod "etcd-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:08.722382  139843 pod_ready.go:81] duration metric: took 5.378826ms for pod "etcd-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.722389  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.722444  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073-m02
	I0731 19:49:08.722452  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.722459  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.722465  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.724963  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.725586  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:08.725599  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.725609  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.725615  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.728137  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.728690  139843 pod_ready.go:92] pod "etcd-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:08.728705  139843 pod_ready.go:81] duration metric: took 6.304389ms for pod "etcd-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.728722  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.884091  139843 request.go:629] Waited for 155.305049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073
	I0731 19:49:08.884168  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073
	I0731 19:49:08.884174  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.884181  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.884187  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.887154  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:09.084308  139843 request.go:629] Waited for 196.394242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:09.084406  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:09.084420  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.084435  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.084438  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.087812  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:09.088349  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:09.088366  139843 pod_ready.go:81] duration metric: took 359.636272ms for pod "kube-apiserver-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.088375  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.284497  139843 request.go:629] Waited for 196.040622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m02
	I0731 19:49:09.284573  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m02
	I0731 19:49:09.284581  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.284592  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.284597  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.287868  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:09.483936  139843 request.go:629] Waited for 195.401913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:09.484018  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:09.484027  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.484035  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.484039  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.487009  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:09.487559  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:09.487579  139843 pod_ready.go:81] duration metric: took 399.197759ms for pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.487589  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.683559  139843 request.go:629] Waited for 195.899757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073
	I0731 19:49:09.683621  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073
	I0731 19:49:09.683626  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.683633  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.683638  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.686902  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:09.883816  139843 request.go:629] Waited for 196.334103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:09.883901  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:09.883927  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.883943  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.883953  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.887473  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:09.888107  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:09.888127  139843 pod_ready.go:81] duration metric: took 400.528979ms for pod "kube-controller-manager-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.888137  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.084326  139843 request.go:629] Waited for 196.105395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m02
	I0731 19:49:10.084406  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m02
	I0731 19:49:10.084411  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.084419  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.084423  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.087939  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:10.284454  139843 request.go:629] Waited for 195.387188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:10.284515  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:10.284520  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.284527  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.284533  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.287832  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:10.288445  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:10.288468  139843 pod_ready.go:81] duration metric: took 400.320918ms for pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.288480  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4g5ws" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.484479  139843 request.go:629] Waited for 195.907591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4g5ws
	I0731 19:49:10.484548  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4g5ws
	I0731 19:49:10.484553  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.484561  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.484568  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.487734  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:10.683655  139843 request.go:629] Waited for 195.293136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:10.683735  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:10.683741  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.683749  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.683755  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.687449  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:10.688030  139843 pod_ready.go:92] pod "kube-proxy-4g5ws" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:10.688052  139843 pod_ready.go:81] duration metric: took 399.565448ms for pod "kube-proxy-4g5ws" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.688062  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-td8j2" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.884272  139843 request.go:629] Waited for 196.128002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-td8j2
	I0731 19:49:10.884374  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-td8j2
	I0731 19:49:10.884386  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.884397  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.884403  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.889281  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:49:11.084223  139843 request.go:629] Waited for 194.075007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:11.084294  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:11.084301  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.084312  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.084317  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.088127  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.088856  139843 pod_ready.go:92] pod "kube-proxy-td8j2" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:11.088878  139843 pod_ready.go:81] duration metric: took 400.81028ms for pod "kube-proxy-td8j2" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.088890  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.283908  139843 request.go:629] Waited for 194.922818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073
	I0731 19:49:11.283982  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073
	I0731 19:49:11.283991  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.283999  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.284009  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.287332  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.484291  139843 request.go:629] Waited for 196.398574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:11.484376  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:11.484387  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.484420  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.484434  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.487627  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.488525  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:11.488542  139843 pod_ready.go:81] duration metric: took 399.646685ms for pod "kube-scheduler-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.488552  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.683616  139843 request.go:629] Waited for 194.979494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m02
	I0731 19:49:11.683682  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m02
	I0731 19:49:11.683687  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.683694  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.683698  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.687195  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.884223  139843 request.go:629] Waited for 196.369854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:11.884304  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:11.884309  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.884317  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.884323  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.887835  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.888327  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:11.888346  139843 pod_ready.go:81] duration metric: took 399.788033ms for pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.888359  139843 pod_ready.go:38] duration metric: took 3.199672771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:49:11.888412  139843 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:49:11.888475  139843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:49:11.904507  139843 api_server.go:72] duration metric: took 22.101506607s to wait for apiserver process to appear ...
	I0731 19:49:11.904533  139843 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:49:11.904555  139843 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0731 19:49:11.908571  139843 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0731 19:49:11.908648  139843 round_trippers.go:463] GET https://192.168.39.146:8443/version
	I0731 19:49:11.908660  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.908669  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.908676  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.909351  139843 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 19:49:11.909491  139843 api_server.go:141] control plane version: v1.30.3
	I0731 19:49:11.909510  139843 api_server.go:131] duration metric: took 4.971291ms to wait for apiserver health ...
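
After the pgrep process check, the apiserver is probed directly: GET /healthz must return the literal body "ok", and GET /version reports the control-plane version (v1.30.3 here). A small client-go sketch of those two calls, with the kubeconfig path as an assumption:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Health probe: a healthy apiserver answers /healthz with the body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Version probe, the equivalent of the GET /version round trip in the log.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}
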
	I0731 19:49:11.909517  139843 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:49:12.083986  139843 request.go:629] Waited for 174.378836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:12.084066  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:12.084073  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:12.084087  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:12.084095  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:12.089366  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:49:12.094198  139843 system_pods.go:59] 17 kube-system pods found
	I0731 19:49:12.094244  139843 system_pods.go:61] "coredns-7db6d8ff4d-d2w7q" [c47597b4-a38b-438c-9c3b-8f7f45130f75] Running
	I0731 19:49:12.094251  139843 system_pods.go:61] "coredns-7db6d8ff4d-f7dzt" [9549b5d7-bb23-4934-883b-dd07f8d864d8] Running
	I0731 19:49:12.094255  139843 system_pods.go:61] "etcd-ha-235073" [ef927139-ead6-413d-b0cd-beb931fc4700] Running
	I0731 19:49:12.094258  139843 system_pods.go:61] "etcd-ha-235073-m02" [2bc3b6c8-c8de-42c0-a752-302d07433ebc] Running
	I0731 19:49:12.094262  139843 system_pods.go:61] "kindnet-6mpsn" [1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef] Running
	I0731 19:49:12.094265  139843 system_pods.go:61] "kindnet-v5g92" [c8020666-5376-4bdf-a9a3-d10b67fc04a9] Running
	I0731 19:49:12.094268  139843 system_pods.go:61] "kube-apiserver-ha-235073" [c7da5168-cd07-4660-91a7-f25bf44db28e] Running
	I0731 19:49:12.094271  139843 system_pods.go:61] "kube-apiserver-ha-235073-m02" [bb498dc0-7bea-4f44-b6ea-0b66122d8205] Running
	I0731 19:49:12.094274  139843 system_pods.go:61] "kube-controller-manager-ha-235073" [1d7ad140-888f-4863-aa09-0651eae569a7] Running
	I0731 19:49:12.094278  139843 system_pods.go:61] "kube-controller-manager-ha-235073-m02" [7d1e23f4-1609-476f-b30e-1e18d291ca4c] Running
	I0731 19:49:12.094281  139843 system_pods.go:61] "kube-proxy-4g5ws" [681015ee-d7ba-460f-a593-0152df2b065d] Running
	I0731 19:49:12.094284  139843 system_pods.go:61] "kube-proxy-td8j2" [b836edfa-4df1-40e4-a58a-3f23afd5b78b] Running
	I0731 19:49:12.094287  139843 system_pods.go:61] "kube-scheduler-ha-235073" [597d51e9-b674-4b7f-b104-6e8808a5d593] Running
	I0731 19:49:12.094290  139843 system_pods.go:61] "kube-scheduler-ha-235073-m02" [84f686e7-4317-41b4-8064-621a7fa7ade8] Running
	I0731 19:49:12.094293  139843 system_pods.go:61] "kube-vip-ha-235073" [f28e113e-7c11-4a00-a8cb-fb5527042343] Running
	I0731 19:49:12.094296  139843 system_pods.go:61] "kube-vip-ha-235073-m02" [4f387765-627c-49e4-9fce-eae672099a6d] Running
	I0731 19:49:12.094299  139843 system_pods.go:61] "storage-provisioner" [9cd9bb70-badc-4b4b-a135-62644edac7dd] Running
	I0731 19:49:12.094307  139843 system_pods.go:74] duration metric: took 184.784656ms to wait for pod list to return data ...
	I0731 19:49:12.094318  139843 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:49:12.283677  139843 request.go:629] Waited for 189.279048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/default/serviceaccounts
	I0731 19:49:12.283743  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/default/serviceaccounts
	I0731 19:49:12.283754  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:12.283768  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:12.283775  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:12.286897  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:12.287132  139843 default_sa.go:45] found service account: "default"
	I0731 19:49:12.287149  139843 default_sa.go:55] duration metric: took 192.825253ms for default service account to be created ...
	I0731 19:49:12.287158  139843 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:49:12.484179  139843 request.go:629] Waited for 196.944899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:12.484243  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:12.484248  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:12.484264  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:12.484268  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:12.491731  139843 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 19:49:12.496731  139843 system_pods.go:86] 17 kube-system pods found
	I0731 19:49:12.496757  139843 system_pods.go:89] "coredns-7db6d8ff4d-d2w7q" [c47597b4-a38b-438c-9c3b-8f7f45130f75] Running
	I0731 19:49:12.496763  139843 system_pods.go:89] "coredns-7db6d8ff4d-f7dzt" [9549b5d7-bb23-4934-883b-dd07f8d864d8] Running
	I0731 19:49:12.496768  139843 system_pods.go:89] "etcd-ha-235073" [ef927139-ead6-413d-b0cd-beb931fc4700] Running
	I0731 19:49:12.496772  139843 system_pods.go:89] "etcd-ha-235073-m02" [2bc3b6c8-c8de-42c0-a752-302d07433ebc] Running
	I0731 19:49:12.496776  139843 system_pods.go:89] "kindnet-6mpsn" [1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef] Running
	I0731 19:49:12.496780  139843 system_pods.go:89] "kindnet-v5g92" [c8020666-5376-4bdf-a9a3-d10b67fc04a9] Running
	I0731 19:49:12.496784  139843 system_pods.go:89] "kube-apiserver-ha-235073" [c7da5168-cd07-4660-91a7-f25bf44db28e] Running
	I0731 19:49:12.496788  139843 system_pods.go:89] "kube-apiserver-ha-235073-m02" [bb498dc0-7bea-4f44-b6ea-0b66122d8205] Running
	I0731 19:49:12.496792  139843 system_pods.go:89] "kube-controller-manager-ha-235073" [1d7ad140-888f-4863-aa09-0651eae569a7] Running
	I0731 19:49:12.496796  139843 system_pods.go:89] "kube-controller-manager-ha-235073-m02" [7d1e23f4-1609-476f-b30e-1e18d291ca4c] Running
	I0731 19:49:12.496800  139843 system_pods.go:89] "kube-proxy-4g5ws" [681015ee-d7ba-460f-a593-0152df2b065d] Running
	I0731 19:49:12.496806  139843 system_pods.go:89] "kube-proxy-td8j2" [b836edfa-4df1-40e4-a58a-3f23afd5b78b] Running
	I0731 19:49:12.496812  139843 system_pods.go:89] "kube-scheduler-ha-235073" [597d51e9-b674-4b7f-b104-6e8808a5d593] Running
	I0731 19:49:12.496817  139843 system_pods.go:89] "kube-scheduler-ha-235073-m02" [84f686e7-4317-41b4-8064-621a7fa7ade8] Running
	I0731 19:49:12.496821  139843 system_pods.go:89] "kube-vip-ha-235073" [f28e113e-7c11-4a00-a8cb-fb5527042343] Running
	I0731 19:49:12.496824  139843 system_pods.go:89] "kube-vip-ha-235073-m02" [4f387765-627c-49e4-9fce-eae672099a6d] Running
	I0731 19:49:12.496828  139843 system_pods.go:89] "storage-provisioner" [9cd9bb70-badc-4b4b-a135-62644edac7dd] Running
	I0731 19:49:12.496834  139843 system_pods.go:126] duration metric: took 209.666593ms to wait for k8s-apps to be running ...
	I0731 19:49:12.496844  139843 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:49:12.496889  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:49:12.512189  139843 system_svc.go:56] duration metric: took 15.336404ms WaitForService to wait for kubelet
	I0731 19:49:12.512220  139843 kubeadm.go:582] duration metric: took 22.709226064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:49:12.512246  139843 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:49:12.683605  139843 request.go:629] Waited for 171.261957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes
	I0731 19:49:12.683673  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes
	I0731 19:49:12.683680  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:12.683690  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:12.683700  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:12.688391  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:49:12.689404  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:49:12.689428  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:49:12.689444  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:49:12.689449  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:49:12.689455  139843 node_conditions.go:105] duration metric: took 177.202999ms to run NodePressure ...
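The block above is minikube's post-join verification for the second control-plane node: poll the apiserver version endpoint, list kube-system pods until all report Running, confirm the default service account exists, check the kubelet unit over SSH, and read each node's capacity. The "Waited for ... due to client-side throttling" lines come from the Kubernetes client's own QPS/Burst rate limiter, not API priority and fairness. A minimal client-go sketch of the same checks; it is not minikube's code path, and the kubeconfig path is a placeholder:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Placeholder kubeconfig path, not the one used by this test run.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    if err != nil {
        panic(err)
    }
    // Raising QPS/Burst shortens the client-side throttling waits seen in the log.
    cfg.QPS = 50
    cfg.Burst = 100
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // The same kind of check the system_pods.go lines log: every kube-system pod Running.
    pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
    }

    // The same data the node_conditions.go lines report: per-node CPU and ephemeral storage.
    nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    }
}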
	I0731 19:49:12.689470  139843 start.go:241] waiting for startup goroutines ...
	I0731 19:49:12.689524  139843 start.go:255] writing updated cluster config ...
	I0731 19:49:12.691548  139843 out.go:177] 
	I0731 19:49:12.693025  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:49:12.693123  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:49:12.694848  139843 out.go:177] * Starting "ha-235073-m03" control-plane node in "ha-235073" cluster
	I0731 19:49:12.696075  139843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:49:12.696105  139843 cache.go:56] Caching tarball of preloaded images
	I0731 19:49:12.696223  139843 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:49:12.696239  139843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:49:12.696324  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:49:12.696503  139843 start.go:360] acquireMachinesLock for ha-235073-m03: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:49:12.696553  139843 start.go:364] duration metric: took 32.257µs to acquireMachinesLock for "ha-235073-m03"
	I0731 19:49:12.696571  139843 start.go:93] Provisioning new machine with config: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
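The provisioning config dumped above carries one Nodes entry per machine; the new m03 entry starts with an empty IP, which stays empty until the KVM domain created below obtains a DHCP lease. A rough sketch of the shape of those entries, using only field names visible in the log (the types are assumptions, not minikube's source):

package main

import "fmt"

// Node mirrors the per-node entries visible in the dumped config above; field
// names come from the log, types are assumptions rather than minikube's actual code.
type Node struct {
    Name              string
    IP                string
    Port              int
    KubernetesVersion string
    ContainerRuntime  string
    ControlPlane      bool
    Worker            bool
}

func main() {
    // The m03 entry as it appears before the VM exists: IP still empty.
    m03 := Node{Name: "m03", Port: 8443, KubernetesVersion: "v1.30.3",
        ContainerRuntime: "crio", ControlPlane: true, Worker: true}
    fmt.Printf("%+v\n", m03)
}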
	I0731 19:49:12.696707  139843 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 19:49:12.698277  139843 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 19:49:12.698378  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:49:12.698426  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:49:12.713698  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0731 19:49:12.714190  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:49:12.714624  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:49:12.714644  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:49:12.715070  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:49:12.715255  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetMachineName
	I0731 19:49:12.715546  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:12.715767  139843 start.go:159] libmachine.API.Create for "ha-235073" (driver="kvm2")
	I0731 19:49:12.715795  139843 client.go:168] LocalClient.Create starting
	I0731 19:49:12.715823  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 19:49:12.715855  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:49:12.715871  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:49:12.715923  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 19:49:12.715943  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:49:12.715953  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:49:12.715969  139843 main.go:141] libmachine: Running pre-create checks...
	I0731 19:49:12.715977  139843 main.go:141] libmachine: (ha-235073-m03) Calling .PreCreateCheck
	I0731 19:49:12.716157  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetConfigRaw
	I0731 19:49:12.716568  139843 main.go:141] libmachine: Creating machine...
	I0731 19:49:12.716581  139843 main.go:141] libmachine: (ha-235073-m03) Calling .Create
	I0731 19:49:12.716737  139843 main.go:141] libmachine: (ha-235073-m03) Creating KVM machine...
	I0731 19:49:12.717976  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found existing default KVM network
	I0731 19:49:12.718141  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found existing private KVM network mk-ha-235073
	I0731 19:49:12.718291  139843 main.go:141] libmachine: (ha-235073-m03) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03 ...
	I0731 19:49:12.718315  139843 main.go:141] libmachine: (ha-235073-m03) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:49:12.718343  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:12.718271  140882 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:49:12.718429  139843 main.go:141] libmachine: (ha-235073-m03) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 19:49:12.963627  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:12.963488  140882 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa...
	I0731 19:49:13.195137  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:13.194998  140882 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/ha-235073-m03.rawdisk...
	I0731 19:49:13.195171  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Writing magic tar header
	I0731 19:49:13.195182  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Writing SSH key tar header
	I0731 19:49:13.195192  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:13.195108  140882 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03 ...
	I0731 19:49:13.195256  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03
	I0731 19:49:13.195298  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03 (perms=drwx------)
	I0731 19:49:13.195311  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 19:49:13.195318  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:49:13.195329  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 19:49:13.195337  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 19:49:13.195352  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:49:13.195386  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:49:13.195393  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:49:13.195417  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 19:49:13.195423  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:49:13.195433  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:49:13.195444  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home
	I0731 19:49:13.195453  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Skipping /home - not owner
	I0731 19:49:13.195488  139843 main.go:141] libmachine: (ha-235073-m03) Creating domain...
	I0731 19:49:13.196312  139843 main.go:141] libmachine: (ha-235073-m03) define libvirt domain using xml: 
	I0731 19:49:13.196333  139843 main.go:141] libmachine: (ha-235073-m03) <domain type='kvm'>
	I0731 19:49:13.196344  139843 main.go:141] libmachine: (ha-235073-m03)   <name>ha-235073-m03</name>
	I0731 19:49:13.196360  139843 main.go:141] libmachine: (ha-235073-m03)   <memory unit='MiB'>2200</memory>
	I0731 19:49:13.196370  139843 main.go:141] libmachine: (ha-235073-m03)   <vcpu>2</vcpu>
	I0731 19:49:13.196380  139843 main.go:141] libmachine: (ha-235073-m03)   <features>
	I0731 19:49:13.196390  139843 main.go:141] libmachine: (ha-235073-m03)     <acpi/>
	I0731 19:49:13.196403  139843 main.go:141] libmachine: (ha-235073-m03)     <apic/>
	I0731 19:49:13.196414  139843 main.go:141] libmachine: (ha-235073-m03)     <pae/>
	I0731 19:49:13.196424  139843 main.go:141] libmachine: (ha-235073-m03)     
	I0731 19:49:13.196433  139843 main.go:141] libmachine: (ha-235073-m03)   </features>
	I0731 19:49:13.196443  139843 main.go:141] libmachine: (ha-235073-m03)   <cpu mode='host-passthrough'>
	I0731 19:49:13.196451  139843 main.go:141] libmachine: (ha-235073-m03)   
	I0731 19:49:13.196461  139843 main.go:141] libmachine: (ha-235073-m03)   </cpu>
	I0731 19:49:13.196469  139843 main.go:141] libmachine: (ha-235073-m03)   <os>
	I0731 19:49:13.196479  139843 main.go:141] libmachine: (ha-235073-m03)     <type>hvm</type>
	I0731 19:49:13.196492  139843 main.go:141] libmachine: (ha-235073-m03)     <boot dev='cdrom'/>
	I0731 19:49:13.196506  139843 main.go:141] libmachine: (ha-235073-m03)     <boot dev='hd'/>
	I0731 19:49:13.196517  139843 main.go:141] libmachine: (ha-235073-m03)     <bootmenu enable='no'/>
	I0731 19:49:13.196527  139843 main.go:141] libmachine: (ha-235073-m03)   </os>
	I0731 19:49:13.196535  139843 main.go:141] libmachine: (ha-235073-m03)   <devices>
	I0731 19:49:13.196543  139843 main.go:141] libmachine: (ha-235073-m03)     <disk type='file' device='cdrom'>
	I0731 19:49:13.196555  139843 main.go:141] libmachine: (ha-235073-m03)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/boot2docker.iso'/>
	I0731 19:49:13.196567  139843 main.go:141] libmachine: (ha-235073-m03)       <target dev='hdc' bus='scsi'/>
	I0731 19:49:13.196583  139843 main.go:141] libmachine: (ha-235073-m03)       <readonly/>
	I0731 19:49:13.196593  139843 main.go:141] libmachine: (ha-235073-m03)     </disk>
	I0731 19:49:13.196609  139843 main.go:141] libmachine: (ha-235073-m03)     <disk type='file' device='disk'>
	I0731 19:49:13.196621  139843 main.go:141] libmachine: (ha-235073-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:49:13.196636  139843 main.go:141] libmachine: (ha-235073-m03)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/ha-235073-m03.rawdisk'/>
	I0731 19:49:13.196657  139843 main.go:141] libmachine: (ha-235073-m03)       <target dev='hda' bus='virtio'/>
	I0731 19:49:13.196669  139843 main.go:141] libmachine: (ha-235073-m03)     </disk>
	I0731 19:49:13.196679  139843 main.go:141] libmachine: (ha-235073-m03)     <interface type='network'>
	I0731 19:49:13.196691  139843 main.go:141] libmachine: (ha-235073-m03)       <source network='mk-ha-235073'/>
	I0731 19:49:13.196700  139843 main.go:141] libmachine: (ha-235073-m03)       <model type='virtio'/>
	I0731 19:49:13.196706  139843 main.go:141] libmachine: (ha-235073-m03)     </interface>
	I0731 19:49:13.196713  139843 main.go:141] libmachine: (ha-235073-m03)     <interface type='network'>
	I0731 19:49:13.196735  139843 main.go:141] libmachine: (ha-235073-m03)       <source network='default'/>
	I0731 19:49:13.196759  139843 main.go:141] libmachine: (ha-235073-m03)       <model type='virtio'/>
	I0731 19:49:13.196769  139843 main.go:141] libmachine: (ha-235073-m03)     </interface>
	I0731 19:49:13.196787  139843 main.go:141] libmachine: (ha-235073-m03)     <serial type='pty'>
	I0731 19:49:13.196793  139843 main.go:141] libmachine: (ha-235073-m03)       <target port='0'/>
	I0731 19:49:13.196798  139843 main.go:141] libmachine: (ha-235073-m03)     </serial>
	I0731 19:49:13.196806  139843 main.go:141] libmachine: (ha-235073-m03)     <console type='pty'>
	I0731 19:49:13.196816  139843 main.go:141] libmachine: (ha-235073-m03)       <target type='serial' port='0'/>
	I0731 19:49:13.196828  139843 main.go:141] libmachine: (ha-235073-m03)     </console>
	I0731 19:49:13.196838  139843 main.go:141] libmachine: (ha-235073-m03)     <rng model='virtio'>
	I0731 19:49:13.196848  139843 main.go:141] libmachine: (ha-235073-m03)       <backend model='random'>/dev/random</backend>
	I0731 19:49:13.196858  139843 main.go:141] libmachine: (ha-235073-m03)     </rng>
	I0731 19:49:13.196866  139843 main.go:141] libmachine: (ha-235073-m03)     
	I0731 19:49:13.196874  139843 main.go:141] libmachine: (ha-235073-m03)     
	I0731 19:49:13.196880  139843 main.go:141] libmachine: (ha-235073-m03)   </devices>
	I0731 19:49:13.196886  139843 main.go:141] libmachine: (ha-235073-m03) </domain>
	I0731 19:49:13.196896  139843 main.go:141] libmachine: (ha-235073-m03) 
	I0731 19:49:13.203712  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:f2:41:ab in network default
	I0731 19:49:13.204272  139843 main.go:141] libmachine: (ha-235073-m03) Ensuring networks are active...
	I0731 19:49:13.204294  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:13.204977  139843 main.go:141] libmachine: (ha-235073-m03) Ensuring network default is active
	I0731 19:49:13.205229  139843 main.go:141] libmachine: (ha-235073-m03) Ensuring network mk-ha-235073 is active
	I0731 19:49:13.205590  139843 main.go:141] libmachine: (ha-235073-m03) Getting domain xml...
	I0731 19:49:13.206439  139843 main.go:141] libmachine: (ha-235073-m03) Creating domain...
	I0731 19:49:14.417702  139843 main.go:141] libmachine: (ha-235073-m03) Waiting to get IP...
	I0731 19:49:14.418375  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:14.418792  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:14.418838  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:14.418768  140882 retry.go:31] will retry after 301.990056ms: waiting for machine to come up
	I0731 19:49:14.722399  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:14.722867  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:14.722894  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:14.722810  140882 retry.go:31] will retry after 380.1158ms: waiting for machine to come up
	I0731 19:49:15.104470  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:15.104900  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:15.104928  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:15.104857  140882 retry.go:31] will retry after 481.472336ms: waiting for machine to come up
	I0731 19:49:15.587436  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:15.587814  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:15.587844  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:15.587775  140882 retry.go:31] will retry after 446.282461ms: waiting for machine to come up
	I0731 19:49:16.035180  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:16.035583  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:16.035610  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:16.035535  140882 retry.go:31] will retry after 637.584414ms: waiting for machine to come up
	I0731 19:49:16.674897  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:16.675311  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:16.675336  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:16.675266  140882 retry.go:31] will retry after 740.193685ms: waiting for machine to come up
	I0731 19:49:17.417075  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:17.417538  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:17.417571  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:17.417475  140882 retry.go:31] will retry after 931.617013ms: waiting for machine to come up
	I0731 19:49:18.350335  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:18.350809  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:18.350835  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:18.350786  140882 retry.go:31] will retry after 1.145262324s: waiting for machine to come up
	I0731 19:49:19.498024  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:19.498539  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:19.498564  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:19.498490  140882 retry.go:31] will retry after 1.70182596s: waiting for machine to come up
	I0731 19:49:21.201440  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:21.201898  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:21.201926  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:21.201850  140882 retry.go:31] will retry after 2.005317649s: waiting for machine to come up
	I0731 19:49:23.209062  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:23.209764  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:23.209812  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:23.209708  140882 retry.go:31] will retry after 2.130232319s: waiting for machine to come up
	I0731 19:49:25.342820  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:25.343281  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:25.343310  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:25.343241  140882 retry.go:31] will retry after 2.512740406s: waiting for machine to come up
	I0731 19:49:27.857598  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:27.858125  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:27.858156  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:27.858085  140882 retry.go:31] will retry after 4.435303382s: waiting for machine to come up
	I0731 19:49:32.298335  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:32.298703  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:32.298730  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:32.298654  140882 retry.go:31] will retry after 4.668024043s: waiting for machine to come up
	I0731 19:49:36.970540  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:36.971105  139843 main.go:141] libmachine: (ha-235073-m03) Found IP for machine: 192.168.39.136
	I0731 19:49:36.971129  139843 main.go:141] libmachine: (ha-235073-m03) Reserving static IP address...
	I0731 19:49:36.971143  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has current primary IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:36.971532  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find host DHCP lease matching {name: "ha-235073-m03", mac: "52:54:00:6d:fb:8e", ip: "192.168.39.136"} in network mk-ha-235073
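The "will retry after ..." lines are the kvm2 driver polling the mk-ha-235073 network for a DHCP lease matching the domain's MAC address, with a growing, jittered delay, until the new VM reports 192.168.39.136 roughly 24 seconds after the domain is started. A standalone sketch of that retry shape; lookupLeaseIP is a stand-in, not the libmachine implementation:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// lookupLeaseIP is a stand-in for querying the libvirt network's DHCP leases
// by MAC address; it always fails here so the retry loop is visible.
func lookupLeaseIP(mac string) (string, error) {
    return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    delay := 300 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookupLeaseIP(mac); err == nil {
            return ip, nil
        }
        // Grow the delay and add jitter, mirroring the irregular
        // 301ms, 380ms, 481ms, ... intervals seen in the log.
        sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
        fmt.Printf("no IP yet, retrying after %v\n", sleep)
        time.Sleep(sleep)
        delay += delay / 3
    }
    return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
}

func main() {
    if _, err := waitForIP("52:54:00:6d:fb:8e", 30*time.Second); err != nil {
        fmt.Println(err)
    }
}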
	I0731 19:49:37.046651  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Getting to WaitForSSH function...
	I0731 19:49:37.046684  139843 main.go:141] libmachine: (ha-235073-m03) Reserved static IP address: 192.168.39.136
	I0731 19:49:37.046697  139843 main.go:141] libmachine: (ha-235073-m03) Waiting for SSH to be available...
	I0731 19:49:37.049355  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:37.049693  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073
	I0731 19:49:37.049734  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find defined IP address of network mk-ha-235073 interface with MAC address 52:54:00:6d:fb:8e
	I0731 19:49:37.049874  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using SSH client type: external
	I0731 19:49:37.049900  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa (-rw-------)
	I0731 19:49:37.049953  139843 main.go:141] libmachine: (ha-235073-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:49:37.049983  139843 main.go:141] libmachine: (ha-235073-m03) DBG | About to run SSH command:
	I0731 19:49:37.050000  139843 main.go:141] libmachine: (ha-235073-m03) DBG | exit 0
	I0731 19:49:37.053802  139843 main.go:141] libmachine: (ha-235073-m03) DBG | SSH cmd err, output: exit status 255: 
	I0731 19:49:37.053828  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 19:49:37.053839  139843 main.go:141] libmachine: (ha-235073-m03) DBG | command : exit 0
	I0731 19:49:37.053844  139843 main.go:141] libmachine: (ha-235073-m03) DBG | err     : exit status 255
	I0731 19:49:37.053853  139843 main.go:141] libmachine: (ha-235073-m03) DBG | output  : 
	I0731 19:49:40.054762  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Getting to WaitForSSH function...
	I0731 19:49:40.057459  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.057925  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.057962  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.058043  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using SSH client type: external
	I0731 19:49:40.058082  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa (-rw-------)
	I0731 19:49:40.058113  139843 main.go:141] libmachine: (ha-235073-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:49:40.058127  139843 main.go:141] libmachine: (ha-235073-m03) DBG | About to run SSH command:
	I0731 19:49:40.058142  139843 main.go:141] libmachine: (ha-235073-m03) DBG | exit 0
	I0731 19:49:40.189626  139843 main.go:141] libmachine: (ha-235073-m03) DBG | SSH cmd err, output: <nil>: 
	I0731 19:49:40.189871  139843 main.go:141] libmachine: (ha-235073-m03) KVM machine creation complete!
	I0731 19:49:40.190213  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetConfigRaw
	I0731 19:49:40.190809  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:40.191043  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:40.191214  139843 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:49:40.191230  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:49:40.192502  139843 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:49:40.192516  139843 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:49:40.192522  139843 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:49:40.192528  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.194981  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.195297  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.195323  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.195496  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.195691  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.195894  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.196034  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.196246  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:40.196467  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:40.196478  139843 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:49:40.312666  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
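WaitForSSH simply runs "exit 0" through the external ssh binary with the options logged at 19:49:37 and 19:49:40, treating a zero exit status as "SSH is available" (the first attempt returns 255 because no IP had been assigned yet). A small sketch of that probe using the system ssh client; user, host and key path are placeholders:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// sshReady runs "exit 0" on the target with the same kind of options the log
// shows for the external ssh client; a zero exit status means SSH is usable.
func sshReady(user, host, keyPath string) bool {
    args := []string{
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-i", keyPath,
        fmt.Sprintf("%s@%s", user, host),
        "exit", "0",
    }
    return exec.Command("ssh", args...).Run() == nil
}

func main() {
    // Placeholder values; the run above used docker@192.168.39.136 and the
    // generated id_rsa under the profile's machines directory.
    for i := 0; i < 10; i++ {
        if sshReady("docker", "192.168.39.136", "/path/to/id_rsa") {
            fmt.Println("ssh is available")
            return
        }
        time.Sleep(3 * time.Second) // the log retries after ~3s on failure
    }
    fmt.Println("gave up waiting for ssh")
}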
	I0731 19:49:40.312697  139843 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:49:40.312706  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.315500  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.315839  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.315867  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.315998  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.316188  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.316352  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.316503  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.316683  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:40.316843  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:40.316854  139843 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:49:40.430110  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:49:40.430171  139843 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:49:40.430179  139843 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:49:40.430187  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetMachineName
	I0731 19:49:40.430469  139843 buildroot.go:166] provisioning hostname "ha-235073-m03"
	I0731 19:49:40.430491  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetMachineName
	I0731 19:49:40.430689  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.433312  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.433683  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.433703  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.433856  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.434054  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.434203  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.434329  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.434530  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:40.434688  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:40.434700  139843 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-235073-m03 && echo "ha-235073-m03" | sudo tee /etc/hostname
	I0731 19:49:40.563706  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073-m03
	
	I0731 19:49:40.563740  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.566368  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.566729  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.566757  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.566911  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.567109  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.567302  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.567507  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.567664  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:40.567823  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:40.567839  139843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-235073-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-235073-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-235073-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:49:40.691258  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
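Hostname provisioning is two idempotent shell snippets pushed over the same SSH channel: one sets the hostname and writes /etc/hostname, the other rewrites the 127.0.1.1 entry in /etc/hosts only if the name is missing. A sketch of assembling those commands for an arbitrary node name (string building only; this is not minikube's exact template):

package main

import "fmt"

// hostnameCommands returns commands equivalent to the two shown in the log,
// parameterised by node name.
func hostnameCommands(name string) (setHostname, fixHosts string) {
    setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
    fixHosts = fmt.Sprintf(
        `if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
    return setHostname, fixHosts
}

func main() {
    set, fix := hostnameCommands("ha-235073-m03")
    fmt.Println(set)
    fmt.Println(fix)
}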
	I0731 19:49:40.691293  139843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:49:40.691314  139843 buildroot.go:174] setting up certificates
	I0731 19:49:40.691327  139843 provision.go:84] configureAuth start
	I0731 19:49:40.691340  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetMachineName
	I0731 19:49:40.691652  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:49:40.694219  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.694696  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.694719  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.694934  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.696956  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.697357  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.697387  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.697517  139843 provision.go:143] copyHostCerts
	I0731 19:49:40.697556  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:49:40.697589  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 19:49:40.697611  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:49:40.697683  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:49:40.697758  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:49:40.697776  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 19:49:40.697783  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:49:40.697806  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:49:40.697848  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:49:40.697866  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 19:49:40.697872  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:49:40.697894  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:49:40.697942  139843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.ha-235073-m03 san=[127.0.0.1 192.168.39.136 ha-235073-m03 localhost minikube]
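The server certificate generated here is signed by the profile's CA (ca.pem / ca-key.pem) and carries the SANs listed above: 127.0.0.1, the machine IP, the hostname, localhost and minikube. A self-contained crypto/x509 sketch producing a certificate with the same SANs; it generates a throwaway CA in-process instead of loading minikube's, so it is an illustration, not the tool's implementation:

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA standing in for the profile's ca.pem / ca-key.pem.
    caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        panic(err)
    }
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().AddDate(3, 0, 0),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    caCert, _ := x509.ParseCertificate(caDER)

    srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-235073-m03"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(3, 0, 0),
        // SANs matching the provision.go line above.
        DNSNames:    []string{"ha-235073-m03", "localhost", "minikube"},
        IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.136")},
        KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
    srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}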
	I0731 19:49:40.934287  139843 provision.go:177] copyRemoteCerts
	I0731 19:49:40.934344  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:49:40.934368  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.937136  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.937484  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.937507  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.937746  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.937932  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.938104  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.938260  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:49:41.023742  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:49:41.023817  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:49:41.051389  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:49:41.051469  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 19:49:41.076706  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:49:41.076784  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 19:49:41.100557  139843 provision.go:87] duration metric: took 409.214806ms to configureAuth
	I0731 19:49:41.100590  139843 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:49:41.100848  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:49:41.100949  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:41.103740  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.104105  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.104131  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.104338  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.104544  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.104728  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.104886  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.105085  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:41.105301  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:41.105318  139843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:49:41.394123  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
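The %!s(MISSING) inside the command at 19:49:41 is Go's fmt package flagging a %s verb that had no matching argument when the command string was built; the CRI-O drop-in is still written, as the echoed CRIO_MINIKUBE_OPTIONS output shows. A two-line illustration of how that marker is produced:

package main

import "fmt"

func main() {
    // One %s verb, no argument: fmt renders the %!s(MISSING) marker seen in
    // the provisioning command above.
    cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"")
    fmt.Println(cmd)
}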
	I0731 19:49:41.394157  139843 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:49:41.394167  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetURL
	I0731 19:49:41.395400  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using libvirt version 6000000
	I0731 19:49:41.397436  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.397766  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.397793  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.397919  139843 main.go:141] libmachine: Docker is up and running!
	I0731 19:49:41.397934  139843 main.go:141] libmachine: Reticulating splines...
	I0731 19:49:41.397942  139843 client.go:171] duration metric: took 28.682138125s to LocalClient.Create
	I0731 19:49:41.397970  139843 start.go:167] duration metric: took 28.682204129s to libmachine.API.Create "ha-235073"
	I0731 19:49:41.397982  139843 start.go:293] postStartSetup for "ha-235073-m03" (driver="kvm2")
	I0731 19:49:41.397997  139843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:49:41.398018  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.398284  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:49:41.398307  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:41.400510  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.400846  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.400870  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.401032  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.401239  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.401457  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.401624  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:49:41.487941  139843 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:49:41.492747  139843 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:49:41.492774  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:49:41.492831  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:49:41.492907  139843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 19:49:41.492921  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 19:49:41.493032  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:49:41.502859  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:49:41.527878  139843 start.go:296] duration metric: took 129.876972ms for postStartSetup
	I0731 19:49:41.527936  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetConfigRaw
	I0731 19:49:41.528505  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:49:41.531265  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.531659  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.531699  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.531979  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:49:41.532211  139843 start.go:128] duration metric: took 28.83549273s to createHost
	I0731 19:49:41.532235  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:41.534681  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.535082  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.535106  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.535285  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.535487  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.535637  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.535836  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.536031  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:41.536235  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:41.536247  139843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:49:41.649889  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722455381.621855104
	
	I0731 19:49:41.649915  139843 fix.go:216] guest clock: 1722455381.621855104
	I0731 19:49:41.649924  139843 fix.go:229] Guest: 2024-07-31 19:49:41.621855104 +0000 UTC Remote: 2024-07-31 19:49:41.532223138 +0000 UTC m=+223.341153733 (delta=89.631966ms)
	I0731 19:49:41.649947  139843 fix.go:200] guest clock delta is within tolerance: 89.631966ms
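
The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it to the host clock, and accept the machine when the delta is small enough. A minimal Go sketch of that comparison, using the timestamps from this run and an assumed one-second tolerance (the exact threshold is not shown in this log):

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock that no time adjustment is needed.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        return math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // Values taken from the log above: guest clock vs. remote (host) clock.
        guest := time.Date(2024, time.July, 31, 19, 49, 41, 621855104, time.UTC)
        host := time.Date(2024, time.July, 31, 19, 49, 41, 532223138, time.UTC)

        delta := guest.Sub(host) // ~89.6ms in this run
        fmt.Printf("delta=%v within tolerance: %v\n", delta, withinTolerance(guest, host, time.Second))
    }
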
	I0731 19:49:41.649954  139843 start.go:83] releasing machines lock for "ha-235073-m03", held for 28.95339132s
	I0731 19:49:41.649980  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.650238  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:49:41.653246  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.654123  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.654174  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.656370  139843 out.go:177] * Found network options:
	I0731 19:49:41.658139  139843 out.go:177]   - NO_PROXY=192.168.39.146,192.168.39.102
	W0731 19:49:41.659394  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 19:49:41.659424  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 19:49:41.659443  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.659994  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.660184  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.660288  139843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:49:41.660329  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	W0731 19:49:41.660399  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 19:49:41.660435  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 19:49:41.660503  139843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:49:41.660526  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:41.662875  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.663195  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.663354  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.663490  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.663531  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.663570  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.663576  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.663775  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.663784  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.663941  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.663957  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.664140  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.664145  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:49:41.664289  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:49:41.902689  139843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:49:41.908774  139843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:49:41.908851  139843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:49:41.924353  139843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:49:41.924374  139843 start.go:495] detecting cgroup driver to use...
	I0731 19:49:41.924438  139843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:49:41.941590  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:49:41.956027  139843 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:49:41.956088  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:49:41.970176  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:49:41.983233  139843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:49:42.102513  139843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:49:42.256596  139843 docker.go:233] disabling docker service ...
	I0731 19:49:42.256663  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:49:42.271847  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:49:42.285469  139843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:49:42.428666  139843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:49:42.556537  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:49:42.571888  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:49:42.590235  139843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:49:42.590313  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.600932  139843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:49:42.601004  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.613682  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.624498  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.634794  139843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:49:42.645520  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.656329  139843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.674523  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.684828  139843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:49:42.695013  139843 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:49:42.695074  139843 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:49:42.709252  139843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
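
When `sysctl net.bridge.bridge-nf-call-iptables` fails because the kernel module is not loaded yet, the runner falls back to `modprobe br_netfilter` and then enables IPv4 forwarding, as the three commands above show. A rough Go sketch of that fallback, run locally with os/exec rather than minikube's ssh_runner (illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the sequence in the log: probe the sysctl,
    // load br_netfilter if the key is missing, then turn on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // Key not present yet; load the module that provides it.
            if err := exec.Command("sudo", "modprobe", "br_netfilter"); err.Run() != nil {
                return fmt.Errorf("modprobe br_netfilter failed")
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println("netfilter setup failed:", err)
        }
    }
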
	I0731 19:49:42.719340  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:49:42.843340  139843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:49:42.992388  139843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:49:42.992468  139843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:49:42.997721  139843 start.go:563] Will wait 60s for crictl version
	I0731 19:49:42.997774  139843 ssh_runner.go:195] Run: which crictl
	I0731 19:49:43.001818  139843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:49:43.046559  139843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:49:43.046674  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:49:43.076903  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:49:43.108474  139843 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:49:43.109991  139843 out.go:177]   - env NO_PROXY=192.168.39.146
	I0731 19:49:43.111425  139843 out.go:177]   - env NO_PROXY=192.168.39.146,192.168.39.102
	I0731 19:49:43.112762  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:49:43.115493  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:43.115896  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:43.115917  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:43.116125  139843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:49:43.120571  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:49:43.133395  139843 mustload.go:65] Loading cluster: ha-235073
	I0731 19:49:43.133659  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:49:43.134004  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:49:43.134055  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:49:43.148767  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0731 19:49:43.149177  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:49:43.149677  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:49:43.149700  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:49:43.150026  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:49:43.150262  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:49:43.151953  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:49:43.152410  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:49:43.152446  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:49:43.167592  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I0731 19:49:43.167997  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:49:43.168492  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:49:43.168514  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:49:43.168834  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:49:43.169047  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:49:43.169211  139843 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073 for IP: 192.168.39.136
	I0731 19:49:43.169232  139843 certs.go:194] generating shared ca certs ...
	I0731 19:49:43.169248  139843 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:49:43.169388  139843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:49:43.169433  139843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:49:43.169442  139843 certs.go:256] generating profile certs ...
	I0731 19:49:43.169508  139843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key
	I0731 19:49:43.169533  139843 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.5f4bd5e8
	I0731 19:49:43.169548  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.5f4bd5e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.146 192.168.39.102 192.168.39.136 192.168.39.254]
	I0731 19:49:43.325937  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.5f4bd5e8 ...
	I0731 19:49:43.325971  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.5f4bd5e8: {Name:mk7c32c651a738beae3b332c901ba02ca2f38208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:49:43.326171  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.5f4bd5e8 ...
	I0731 19:49:43.326187  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.5f4bd5e8: {Name:mk4c7eb40d841fadf32775c3ad6100bc7dcc5cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:49:43.326289  139843 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.5f4bd5e8 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt
	I0731 19:49:43.326420  139843 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.5f4bd5e8 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key
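
The apiserver certificate regenerated here must carry every control-plane IP (192.168.39.146, .102 and the new .136), the in-cluster service IP 10.96.0.1 and the kube-vip VIP 192.168.39.254 as SANs, otherwise TLS connections to the third control plane would be rejected. A compact Go sketch of issuing such a certificate with crypto/x509, using an in-memory CA as a stand-in for minikubeCA (a sketch only, not minikube's certs.go):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // API server certificate with the IP SANs listed in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.146"), net.ParseIP("192.168.39.102"),
                net.ParseIP("192.168.39.136"), net.ParseIP("192.168.39.254"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        _ = srvDER // would be PEM-encoded and written out as apiserver.crt
    }
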
	I0731 19:49:43.326542  139843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key
	I0731 19:49:43.326560  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:49:43.326572  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:49:43.326584  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:49:43.326594  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:49:43.326608  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:49:43.326619  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:49:43.326631  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:49:43.326642  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:49:43.326690  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 19:49:43.326718  139843 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 19:49:43.326726  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:49:43.326746  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:49:43.326767  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:49:43.326787  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:49:43.326822  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:49:43.326847  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:49:43.326860  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 19:49:43.326872  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 19:49:43.326904  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:49:43.330104  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:49:43.330628  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:49:43.330653  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:49:43.330929  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:49:43.331122  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:49:43.331321  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:49:43.331454  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:49:43.401675  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 19:49:43.407029  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 19:49:43.418402  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 19:49:43.422817  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0731 19:49:43.434800  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 19:49:43.438848  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 19:49:43.450534  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 19:49:43.454515  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 19:49:43.464745  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 19:49:43.468720  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 19:49:43.479197  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 19:49:43.483801  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 19:49:43.494510  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:49:43.522237  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:49:43.547593  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:49:43.570838  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:49:43.593725  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 19:49:43.618028  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:49:43.644478  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:49:43.670965  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:49:43.694595  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:49:43.719254  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 19:49:43.743708  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 19:49:43.767864  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 19:49:43.785116  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0731 19:49:43.802067  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 19:49:43.818557  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 19:49:43.834520  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 19:49:43.850999  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 19:49:43.868592  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 19:49:43.890832  139843 ssh_runner.go:195] Run: openssl version
	I0731 19:49:43.896712  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:49:43.908340  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:49:43.913343  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:49:43.913397  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:49:43.919326  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:49:43.930633  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 19:49:43.941544  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 19:49:43.946289  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 19:49:43.946340  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 19:49:43.952209  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 19:49:43.964522  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 19:49:43.976763  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 19:49:43.981404  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 19:49:43.981463  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 19:49:43.987499  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
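
Each CA file placed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above) so the system trust store can resolve it. A hedged Go sketch of that step, shelling out to the same openssl invocation shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
    // mirroring the `openssl x509 -hash -noout` + `ln -fs` pair in the log.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
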
	I0731 19:49:43.999552  139843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:49:44.003896  139843 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:49:44.003953  139843 kubeadm.go:934] updating node {m03 192.168.39.136 8443 v1.30.3 crio true true} ...
	I0731 19:49:44.004047  139843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-235073-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:49:44.004075  139843 kube-vip.go:115] generating kube-vip config ...
	I0731 19:49:44.004113  139843 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 19:49:44.023252  139843 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 19:49:44.023326  139843 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 19:49:44.023390  139843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:49:44.034652  139843 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 19:49:44.034696  139843 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 19:49:44.045552  139843 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 19:49:44.045578  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 19:49:44.045587  139843 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 19:49:44.045605  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 19:49:44.045655  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 19:49:44.045674  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 19:49:44.045674  139843 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 19:49:44.045732  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:49:44.056803  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 19:49:44.056830  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 19:49:44.063719  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 19:49:44.063727  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 19:49:44.063743  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 19:49:44.063829  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 19:49:44.132861  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 19:49:44.132901  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
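
For each of kubectl, kubeadm and kubelet the runner first stats the target under /var/lib/minikube/binaries/v1.30.3 and only copies the cached binary when that stat fails, which is why all three transfers happen on this fresh node. A small Go sketch of the same check-then-copy pattern on a local filesystem (paths are illustrative; minikube does this through its ssh_runner):

    package main

    import (
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    // ensureBinary copies src into destDir only when the destination is missing,
    // mirroring the stat-then-scp pattern in the log above.
    func ensureBinary(src, destDir, name string) error {
        dst := filepath.Join(destDir, name)
        if _, err := os.Stat(dst); err == nil {
            return nil // already present, skip the transfer
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        for _, b := range []string{"kubectl", "kubeadm", "kubelet"} {
            src := filepath.Join(os.Getenv("HOME"), ".minikube/cache/linux/amd64/v1.30.3", b)
            if err := ensureBinary(src, "/var/lib/minikube/binaries/v1.30.3", b); err != nil {
                fmt.Println(b, ":", err)
            }
        }
    }
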
	I0731 19:49:44.924241  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 19:49:44.933907  139843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 19:49:44.951614  139843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:49:44.968733  139843 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 19:49:44.986968  139843 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 19:49:44.991180  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
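
The grep/echo pipeline above rewrites /etc/hosts so that exactly one control-plane.minikube.internal entry remains, pointing at the VIP 192.168.39.254. An equivalent hedged Go sketch of that idempotent update (reading and rewriting the file directly instead of going through a temp file and sudo cp):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry drops any existing line for host and appends "ip\thost",
    // matching the grep -v / echo pipeline in the log.
    func setHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry, drop it
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
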
	I0731 19:49:45.004475  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:49:45.134748  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:49:45.153563  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:49:45.154025  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:49:45.154083  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:49:45.170990  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0731 19:49:45.171459  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:49:45.171997  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:49:45.172022  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:49:45.172400  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:49:45.172613  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:49:45.172798  139843 start.go:317] joinCluster: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:49:45.172947  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 19:49:45.172982  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:49:45.176138  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:49:45.176649  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:49:45.176678  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:49:45.176836  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:49:45.177064  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:49:45.177229  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:49:45.177368  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:49:45.332057  139843 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:49:45.332111  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b3yv26.dt4fe9zeda3apkfd --discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-235073-m03 --control-plane --apiserver-advertise-address=192.168.39.136 --apiserver-bind-port=8443"
	I0731 19:50:07.954577  139843 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b3yv26.dt4fe9zeda3apkfd --discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-235073-m03 --control-plane --apiserver-advertise-address=192.168.39.136 --apiserver-bind-port=8443": (22.622434486s)
	I0731 19:50:07.954620  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 19:50:08.547853  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-235073-m03 minikube.k8s.io/updated_at=2024_07_31T19_50_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=ha-235073 minikube.k8s.io/primary=false
	I0731 19:50:08.665157  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-235073-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 19:50:08.801101  139843 start.go:319] duration metric: took 23.628296732s to joinCluster
	I0731 19:50:08.801196  139843 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:50:08.801549  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:50:08.802829  139843 out.go:177] * Verifying Kubernetes components...
	I0731 19:50:08.804572  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:50:09.129690  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:50:09.174675  139843 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:50:09.175027  139843 kapi.go:59] client config for ha-235073: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key", CAFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 19:50:09.175126  139843 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.146:8443
	I0731 19:50:09.175432  139843 node_ready.go:35] waiting up to 6m0s for node "ha-235073-m03" to be "Ready" ...
	I0731 19:50:09.175577  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:09.175649  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:09.175665  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:09.175671  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:09.182609  139843 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 19:50:09.676526  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:09.676547  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:09.676556  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:09.676561  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:09.679851  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:10.175738  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:10.175768  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:10.175781  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:10.175787  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:10.179417  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:10.676526  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:10.676553  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:10.676567  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:10.676576  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:10.679880  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:11.175835  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:11.175858  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:11.175867  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:11.175871  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:11.179316  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:11.180521  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:11.675834  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:11.675859  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:11.675872  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:11.675880  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:11.679567  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:12.176140  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:12.176172  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:12.176183  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:12.176187  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:12.179400  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:12.676399  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:12.676419  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:12.676428  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:12.676432  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:12.680072  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:13.175738  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:13.175758  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:13.175767  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:13.175772  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:13.178758  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:13.676409  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:13.676432  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:13.676443  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:13.676449  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:13.679978  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:13.680816  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:14.176272  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:14.176301  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:14.176313  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:14.176320  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:14.179532  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:14.676106  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:14.676134  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:14.676147  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:14.676154  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:14.679525  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:15.176711  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:15.176740  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:15.176750  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:15.176756  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:15.180175  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:15.676603  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:15.676633  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:15.676645  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:15.676653  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:15.680746  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:15.681392  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:16.176393  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:16.176419  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:16.176429  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:16.176434  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:16.179564  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:16.675683  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:16.675708  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:16.675720  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:16.675725  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:16.679525  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:17.176218  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:17.176241  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:17.176252  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:17.176257  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:17.179403  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:17.676258  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:17.676279  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:17.676286  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:17.676291  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:17.680414  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:17.681459  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:18.175687  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:18.175705  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:18.175713  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:18.175716  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:18.179220  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:18.676264  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:18.676289  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:18.676300  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:18.676306  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:18.684784  139843 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 19:50:19.175730  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:19.175754  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:19.175763  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:19.175767  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:19.178808  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:19.675818  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:19.675840  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:19.675849  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:19.675853  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:19.678963  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:20.176029  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:20.176053  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:20.176065  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:20.176070  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:20.179637  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:20.180334  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:20.676676  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:20.676698  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:20.676706  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:20.676712  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:20.679931  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:21.175886  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:21.175908  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:21.175917  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:21.175922  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:21.179155  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:21.676122  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:21.676146  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:21.676154  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:21.676160  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:21.679228  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:22.176509  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:22.176531  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:22.176539  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:22.176542  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:22.180204  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:22.181211  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:22.676458  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:22.676483  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:22.676494  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:22.676502  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:22.679956  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:23.176171  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:23.176193  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:23.176204  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:23.176208  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:23.179717  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:23.676251  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:23.676274  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:23.676283  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:23.676287  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:23.680051  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:24.175976  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:24.175998  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:24.176007  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:24.176010  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:24.179354  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:24.676324  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:24.676353  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:24.676365  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:24.676372  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:24.679868  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:24.680464  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:25.175744  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:25.175767  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:25.175777  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:25.175780  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:25.178789  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:25.675727  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:25.675751  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:25.675760  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:25.675763  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:25.679517  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:26.176630  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:26.176652  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:26.176662  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:26.176667  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:26.179563  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:26.676595  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:26.676618  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:26.676626  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:26.676630  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:26.679987  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:26.680562  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:27.176291  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:27.176312  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:27.176321  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:27.176326  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:27.179392  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:27.676328  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:27.676355  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:27.676369  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:27.676373  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:27.680541  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:28.176110  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:28.176136  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.176148  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.176155  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.183105  139843 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 19:50:28.675751  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:28.675776  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.675786  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.675793  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.679910  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:28.680710  139843 node_ready.go:49] node "ha-235073-m03" has status "Ready":"True"
	I0731 19:50:28.680731  139843 node_ready.go:38] duration metric: took 19.505274208s for node "ha-235073-m03" to be "Ready" ...
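(The loop above issues a GET against /api/v1/nodes/ha-235073-m03 roughly every 500 ms until the node's Ready condition flips to True. Below is a minimal client-go sketch of that polling pattern; the kubeconfig path and timings are assumed placeholders, and this is not minikube's actual node_ready.go implementation.)

// nodeready_sketch.go: poll a node's Ready condition with client-go.
// Illustrative only; kubeconfig path and node name are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-235073-m03", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence seen in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}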
	I0731 19:50:28.680741  139843 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:50:28.680804  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:28.680814  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.680821  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.680824  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.688302  139843 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 19:50:28.694640  139843 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.694732  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-d2w7q
	I0731 19:50:28.694741  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.694748  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.694754  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.697603  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.698320  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:28.698336  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.698346  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.698351  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.700994  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.701426  139843 pod_ready.go:92] pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:28.701451  139843 pod_ready.go:81] duration metric: took 6.786511ms for pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.701462  139843 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.701522  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f7dzt
	I0731 19:50:28.701534  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.701544  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.701554  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.704325  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.704893  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:28.704908  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.704916  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.704921  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.707262  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.707957  139843 pod_ready.go:92] pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:28.707972  139843 pod_ready.go:81] duration metric: took 6.504881ms for pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.707980  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.708026  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073
	I0731 19:50:28.708033  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.708040  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.708044  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.710485  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.710939  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:28.710950  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.710957  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.710962  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.713442  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.714244  139843 pod_ready.go:92] pod "etcd-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:28.714261  139843 pod_ready.go:81] duration metric: took 6.276497ms for pod "etcd-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.714269  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.714315  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073-m02
	I0731 19:50:28.714322  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.714329  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.714334  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.717791  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:28.719025  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:28.719044  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.719054  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.719059  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.722461  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:28.723383  139843 pod_ready.go:92] pod "etcd-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:28.723402  139843 pod_ready.go:81] duration metric: took 9.124917ms for pod "etcd-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.723411  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.876794  139843 request.go:629] Waited for 153.326643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073-m03
	I0731 19:50:28.876871  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073-m03
	I0731 19:50:28.876877  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.876888  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.876893  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.880077  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.076081  139843 request.go:629] Waited for 195.375862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:29.076137  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:29.076153  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.076179  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.076189  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.079347  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.080187  139843 pod_ready.go:92] pod "etcd-ha-235073-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:29.080205  139843 pod_ready.go:81] duration metric: took 356.788799ms for pod "etcd-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
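(The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, which delays requests once they queue faster than the configured QPS/Burst allow. A hedged sketch of raising those limits on a rest.Config follows; the values and kubeconfig path are purely illustrative, not what minikube actually configures.)

// Raising client-go's client-side rate limit. With QPS/Burst left at zero,
// client-go falls back to a small default limiter and logs the throttling
// messages seen above when requests back up.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50   // illustrative values, not minikube's settings
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("clientset created with a larger client-side rate limit")
}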
	I0731 19:50:29.080220  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.276539  139843 request.go:629] Waited for 196.232295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073
	I0731 19:50:29.276635  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073
	I0731 19:50:29.276648  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.276658  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.276663  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.279859  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.475793  139843 request.go:629] Waited for 195.294387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:29.475866  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:29.475874  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.475886  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.475896  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.479417  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.480298  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:29.480322  139843 pod_ready.go:81] duration metric: took 400.093165ms for pod "kube-apiserver-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.480335  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.676393  139843 request.go:629] Waited for 195.981834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m02
	I0731 19:50:29.676467  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m02
	I0731 19:50:29.676472  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.676516  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.676523  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.679960  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.876116  139843 request.go:629] Waited for 195.341402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:29.876199  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:29.876207  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.876217  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.876227  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.879280  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.879879  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:29.879899  139843 pod_ready.go:81] duration metric: took 399.557128ms for pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.879908  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.075887  139843 request.go:629] Waited for 195.873775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m03
	I0731 19:50:30.075967  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m03
	I0731 19:50:30.075976  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.075986  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.075994  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.079799  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:30.276753  139843 request.go:629] Waited for 196.234848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:30.276839  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:30.276847  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.276854  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.276862  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.280427  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:30.281150  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:30.281165  139843 pod_ready.go:81] duration metric: took 401.250556ms for pod "kube-apiserver-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.281174  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.476295  139843 request.go:629] Waited for 195.027881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073
	I0731 19:50:30.476359  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073
	I0731 19:50:30.476364  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.476375  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.476379  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.479677  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:30.675822  139843 request.go:629] Waited for 195.286553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:30.675913  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:30.675925  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.675936  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.675942  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.679354  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:30.680187  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:30.680205  139843 pod_ready.go:81] duration metric: took 399.024732ms for pod "kube-controller-manager-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.680214  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.876277  139843 request.go:629] Waited for 195.979424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m02
	I0731 19:50:30.876338  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m02
	I0731 19:50:30.876343  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.876351  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.876356  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.880038  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.076782  139843 request.go:629] Waited for 196.129605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:31.076867  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:31.076877  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.076885  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.076890  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.080255  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.081130  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:31.081152  139843 pod_ready.go:81] duration metric: took 400.931545ms for pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.081163  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.276307  139843 request.go:629] Waited for 195.065568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m03
	I0731 19:50:31.276381  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m03
	I0731 19:50:31.276388  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.276396  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.276402  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.280048  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.476354  139843 request.go:629] Waited for 195.34089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:31.476420  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:31.476424  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.476432  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.476436  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.480461  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:31.481095  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:31.481121  139843 pod_ready.go:81] duration metric: took 399.950752ms for pod "kube-controller-manager-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.481136  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4g5ws" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.676296  139843 request.go:629] Waited for 195.076088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4g5ws
	I0731 19:50:31.676360  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4g5ws
	I0731 19:50:31.676365  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.676377  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.676383  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.680297  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.876450  139843 request.go:629] Waited for 195.374626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:31.876542  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:31.876553  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.876564  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.876571  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.880441  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.880893  139843 pod_ready.go:92] pod "kube-proxy-4g5ws" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:31.880912  139843 pod_ready.go:81] duration metric: took 399.768167ms for pod "kube-proxy-4g5ws" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.880925  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkrmt" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.076237  139843 request.go:629] Waited for 195.239726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkrmt
	I0731 19:50:32.076336  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkrmt
	I0731 19:50:32.076345  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.076356  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.076366  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.080695  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:32.276772  139843 request.go:629] Waited for 195.403494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:32.276829  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:32.276834  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.276842  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.276845  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.281369  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:32.282475  139843 pod_ready.go:92] pod "kube-proxy-mkrmt" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:32.282493  139843 pod_ready.go:81] duration metric: took 401.561302ms for pod "kube-proxy-mkrmt" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.282502  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-td8j2" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.476553  139843 request.go:629] Waited for 193.98316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-td8j2
	I0731 19:50:32.476637  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-td8j2
	I0731 19:50:32.476642  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.476650  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.476654  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.482107  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:50:32.676368  139843 request.go:629] Waited for 193.352065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:32.676476  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:32.676488  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.676498  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.676506  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.679741  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:32.680200  139843 pod_ready.go:92] pod "kube-proxy-td8j2" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:32.680219  139843 pod_ready.go:81] duration metric: took 397.710991ms for pod "kube-proxy-td8j2" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.680228  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.876058  139843 request.go:629] Waited for 195.737513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073
	I0731 19:50:32.876124  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073
	I0731 19:50:32.876132  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.876144  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.876152  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.879409  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.076543  139843 request.go:629] Waited for 196.353473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:33.076601  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:33.076607  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.076614  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.076625  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.080311  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.080994  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:33.081018  139843 pod_ready.go:81] duration metric: took 400.780591ms for pod "kube-scheduler-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.081031  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.275821  139843 request.go:629] Waited for 194.695647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m02
	I0731 19:50:33.275891  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m02
	I0731 19:50:33.275896  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.275903  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.275908  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.279206  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.476256  139843 request.go:629] Waited for 196.416798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:33.476332  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:33.476339  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.476353  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.476360  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.479584  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.480216  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:33.480274  139843 pod_ready.go:81] duration metric: took 399.20017ms for pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.480294  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.676336  139843 request.go:629] Waited for 195.96373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m03
	I0731 19:50:33.676439  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m03
	I0731 19:50:33.676456  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.676469  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.676481  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.680201  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.876453  139843 request.go:629] Waited for 195.361852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:33.876532  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:33.876537  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.876545  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.876552  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.880221  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.880823  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:33.880842  139843 pod_ready.go:81] duration metric: took 400.540427ms for pod "kube-scheduler-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.880854  139843 pod_ready.go:38] duration metric: took 5.200102871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
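(The extra wait above walks the kube-system pods carrying the label selectors quoted in the log and requires each to report the PodReady condition. A small illustrative client-go sketch of that check; the kubeconfig path is an assumption and this is not minikube's pod_ready.go.)

// List kube-system pods by the label selectors from the log and report
// whether each is Ready. Illustrative sketch only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podIsReady(&p))
		}
	}
}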
	I0731 19:50:33.880869  139843 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:50:33.880918  139843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:50:33.895980  139843 api_server.go:72] duration metric: took 25.094738931s to wait for apiserver process to appear ...
	I0731 19:50:33.896009  139843 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:50:33.896033  139843 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0731 19:50:33.901322  139843 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0731 19:50:33.901410  139843 round_trippers.go:463] GET https://192.168.39.146:8443/version
	I0731 19:50:33.901419  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.901436  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.901442  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.902346  139843 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 19:50:33.902424  139843 api_server.go:141] control plane version: v1.30.3
	I0731 19:50:33.902439  139843 api_server.go:131] duration metric: took 6.423299ms to wait for apiserver health ...
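(After pgrep confirms the kube-apiserver process, the log polls https://192.168.39.146:8443/healthz until it answers 200 "ok". A plain-Go sketch of that probe follows; TLS verification is skipped here only to keep the example short, whereas a real client would trust the cluster CA.)

// Poll the apiserver /healthz endpoint until it answers 200. Illustrative only;
// InsecureSkipVerify is an assumption made to keep the sketch self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.146:8443/healthz"

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", string(body)) // expect "ok"
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver never became healthy")
}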
	I0731 19:50:33.902448  139843 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:50:34.075781  139843 request.go:629] Waited for 173.262693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:34.075861  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:34.075867  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:34.075877  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:34.075883  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:34.083018  139843 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 19:50:34.089685  139843 system_pods.go:59] 24 kube-system pods found
	I0731 19:50:34.089714  139843 system_pods.go:61] "coredns-7db6d8ff4d-d2w7q" [c47597b4-a38b-438c-9c3b-8f7f45130f75] Running
	I0731 19:50:34.089719  139843 system_pods.go:61] "coredns-7db6d8ff4d-f7dzt" [9549b5d7-bb23-4934-883b-dd07f8d864d8] Running
	I0731 19:50:34.089722  139843 system_pods.go:61] "etcd-ha-235073" [ef927139-ead6-413d-b0cd-beb931fc4700] Running
	I0731 19:50:34.089725  139843 system_pods.go:61] "etcd-ha-235073-m02" [2bc3b6c8-c8de-42c0-a752-302d07433ebc] Running
	I0731 19:50:34.089728  139843 system_pods.go:61] "etcd-ha-235073-m03" [b78ae13d-78b3-4250-8b6b-dc3a2bd24b53] Running
	I0731 19:50:34.089731  139843 system_pods.go:61] "kindnet-6mpsn" [1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef] Running
	I0731 19:50:34.089734  139843 system_pods.go:61] "kindnet-964d5" [c663aa92-d78d-4d55-a7e8-29bd0d67e7b6] Running
	I0731 19:50:34.089737  139843 system_pods.go:61] "kindnet-v5g92" [c8020666-5376-4bdf-a9a3-d10b67fc04a9] Running
	I0731 19:50:34.089740  139843 system_pods.go:61] "kube-apiserver-ha-235073" [c7da5168-cd07-4660-91a7-f25bf44db28e] Running
	I0731 19:50:34.089745  139843 system_pods.go:61] "kube-apiserver-ha-235073-m02" [bb498dc0-7bea-4f44-b6ea-0b66122d8205] Running
	I0731 19:50:34.089750  139843 system_pods.go:61] "kube-apiserver-ha-235073-m03" [6880f463-4838-414e-8387-7ee8c8b9f84b] Running
	I0731 19:50:34.089753  139843 system_pods.go:61] "kube-controller-manager-ha-235073" [1d7ad140-888f-4863-aa09-0651eae569a7] Running
	I0731 19:50:34.089759  139843 system_pods.go:61] "kube-controller-manager-ha-235073-m02" [7d1e23f4-1609-476f-b30e-1e18d291ca4c] Running
	I0731 19:50:34.089762  139843 system_pods.go:61] "kube-controller-manager-ha-235073-m03" [a6078f70-cd3b-48f2-a9a3-982f9d4bd67d] Running
	I0731 19:50:34.089765  139843 system_pods.go:61] "kube-proxy-4g5ws" [681015ee-d7ba-460f-a593-0152df2b065d] Running
	I0731 19:50:34.089768  139843 system_pods.go:61] "kube-proxy-mkrmt" [5f001ea6-7c3b-4edc-8f66-b107a3c0d570] Running
	I0731 19:50:34.089771  139843 system_pods.go:61] "kube-proxy-td8j2" [b836edfa-4df1-40e4-a58a-3f23afd5b78b] Running
	I0731 19:50:34.089774  139843 system_pods.go:61] "kube-scheduler-ha-235073" [597d51e9-b674-4b7f-b104-6e8808a5d593] Running
	I0731 19:50:34.089777  139843 system_pods.go:61] "kube-scheduler-ha-235073-m02" [84f686e7-4317-41b4-8064-621a7fa7ade8] Running
	I0731 19:50:34.089780  139843 system_pods.go:61] "kube-scheduler-ha-235073-m03" [ce77b19b-2862-41e5-9006-8d6667b563b8] Running
	I0731 19:50:34.089782  139843 system_pods.go:61] "kube-vip-ha-235073" [f28e113e-7c11-4a00-a8cb-fb5527042343] Running
	I0731 19:50:34.089785  139843 system_pods.go:61] "kube-vip-ha-235073-m02" [4f387765-627c-49e4-9fce-eae672099a6d] Running
	I0731 19:50:34.089788  139843 system_pods.go:61] "kube-vip-ha-235073-m03" [abd1a06b-679a-4dc7-87bf-6aa534e6f031] Running
	I0731 19:50:34.089791  139843 system_pods.go:61] "storage-provisioner" [9cd9bb70-badc-4b4b-a135-62644edac7dd] Running
	I0731 19:50:34.089797  139843 system_pods.go:74] duration metric: took 187.341454ms to wait for pod list to return data ...
	I0731 19:50:34.089806  139843 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:50:34.276220  139843 request.go:629] Waited for 186.333016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/default/serviceaccounts
	I0731 19:50:34.276288  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/default/serviceaccounts
	I0731 19:50:34.276296  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:34.276305  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:34.276311  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:34.279550  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:34.279710  139843 default_sa.go:45] found service account: "default"
	I0731 19:50:34.279732  139843 default_sa.go:55] duration metric: took 189.917872ms for default service account to be created ...
	I0731 19:50:34.279742  139843 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:50:34.476177  139843 request.go:629] Waited for 196.355043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:34.476267  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:34.476274  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:34.476286  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:34.476295  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:34.483165  139843 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 19:50:34.491054  139843 system_pods.go:86] 24 kube-system pods found
	I0731 19:50:34.491083  139843 system_pods.go:89] "coredns-7db6d8ff4d-d2w7q" [c47597b4-a38b-438c-9c3b-8f7f45130f75] Running
	I0731 19:50:34.491088  139843 system_pods.go:89] "coredns-7db6d8ff4d-f7dzt" [9549b5d7-bb23-4934-883b-dd07f8d864d8] Running
	I0731 19:50:34.491093  139843 system_pods.go:89] "etcd-ha-235073" [ef927139-ead6-413d-b0cd-beb931fc4700] Running
	I0731 19:50:34.491097  139843 system_pods.go:89] "etcd-ha-235073-m02" [2bc3b6c8-c8de-42c0-a752-302d07433ebc] Running
	I0731 19:50:34.491101  139843 system_pods.go:89] "etcd-ha-235073-m03" [b78ae13d-78b3-4250-8b6b-dc3a2bd24b53] Running
	I0731 19:50:34.491104  139843 system_pods.go:89] "kindnet-6mpsn" [1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef] Running
	I0731 19:50:34.491108  139843 system_pods.go:89] "kindnet-964d5" [c663aa92-d78d-4d55-a7e8-29bd0d67e7b6] Running
	I0731 19:50:34.491112  139843 system_pods.go:89] "kindnet-v5g92" [c8020666-5376-4bdf-a9a3-d10b67fc04a9] Running
	I0731 19:50:34.491115  139843 system_pods.go:89] "kube-apiserver-ha-235073" [c7da5168-cd07-4660-91a7-f25bf44db28e] Running
	I0731 19:50:34.491119  139843 system_pods.go:89] "kube-apiserver-ha-235073-m02" [bb498dc0-7bea-4f44-b6ea-0b66122d8205] Running
	I0731 19:50:34.491123  139843 system_pods.go:89] "kube-apiserver-ha-235073-m03" [6880f463-4838-414e-8387-7ee8c8b9f84b] Running
	I0731 19:50:34.491127  139843 system_pods.go:89] "kube-controller-manager-ha-235073" [1d7ad140-888f-4863-aa09-0651eae569a7] Running
	I0731 19:50:34.491131  139843 system_pods.go:89] "kube-controller-manager-ha-235073-m02" [7d1e23f4-1609-476f-b30e-1e18d291ca4c] Running
	I0731 19:50:34.491137  139843 system_pods.go:89] "kube-controller-manager-ha-235073-m03" [a6078f70-cd3b-48f2-a9a3-982f9d4bd67d] Running
	I0731 19:50:34.491141  139843 system_pods.go:89] "kube-proxy-4g5ws" [681015ee-d7ba-460f-a593-0152df2b065d] Running
	I0731 19:50:34.491145  139843 system_pods.go:89] "kube-proxy-mkrmt" [5f001ea6-7c3b-4edc-8f66-b107a3c0d570] Running
	I0731 19:50:34.491148  139843 system_pods.go:89] "kube-proxy-td8j2" [b836edfa-4df1-40e4-a58a-3f23afd5b78b] Running
	I0731 19:50:34.491152  139843 system_pods.go:89] "kube-scheduler-ha-235073" [597d51e9-b674-4b7f-b104-6e8808a5d593] Running
	I0731 19:50:34.491156  139843 system_pods.go:89] "kube-scheduler-ha-235073-m02" [84f686e7-4317-41b4-8064-621a7fa7ade8] Running
	I0731 19:50:34.491162  139843 system_pods.go:89] "kube-scheduler-ha-235073-m03" [ce77b19b-2862-41e5-9006-8d6667b563b8] Running
	I0731 19:50:34.491166  139843 system_pods.go:89] "kube-vip-ha-235073" [f28e113e-7c11-4a00-a8cb-fb5527042343] Running
	I0731 19:50:34.491170  139843 system_pods.go:89] "kube-vip-ha-235073-m02" [4f387765-627c-49e4-9fce-eae672099a6d] Running
	I0731 19:50:34.491176  139843 system_pods.go:89] "kube-vip-ha-235073-m03" [abd1a06b-679a-4dc7-87bf-6aa534e6f031] Running
	I0731 19:50:34.491180  139843 system_pods.go:89] "storage-provisioner" [9cd9bb70-badc-4b4b-a135-62644edac7dd] Running
	I0731 19:50:34.491187  139843 system_pods.go:126] duration metric: took 211.436551ms to wait for k8s-apps to be running ...
	I0731 19:50:34.491198  139843 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:50:34.491244  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:50:34.507738  139843 system_svc.go:56] duration metric: took 16.527395ms WaitForService to wait for kubelet
	I0731 19:50:34.507770  139843 kubeadm.go:582] duration metric: took 25.706532708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
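(The kubelet check above runs systemctl over SSH inside the VM and relies only on the exit code of "is-active --quiet". A simplified local sketch of the same probe; it checks just the kubelet unit, whereas the logged invocation goes through sudo and ssh_runner.)

// Check whether the kubelet unit is active via systemctl's exit code.
// Illustrative local version of the probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "--quiet" suppresses output; the result is carried by the exit code.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}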
	I0731 19:50:34.507788  139843 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:50:34.676176  139843 request.go:629] Waited for 168.292834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes
	I0731 19:50:34.676232  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes
	I0731 19:50:34.676237  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:34.676244  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:34.676248  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:34.679777  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:34.680927  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:50:34.680947  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:50:34.680959  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:50:34.680966  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:50:34.680972  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:50:34.680979  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:50:34.680987  139843 node_conditions.go:105] duration metric: took 173.19318ms to run NodePressure ...
	I0731 19:50:34.681007  139843 start.go:241] waiting for startup goroutines ...
	I0731 19:50:34.681030  139843 start.go:255] writing updated cluster config ...
	I0731 19:50:34.681371  139843 ssh_runner.go:195] Run: rm -f paused
	I0731 19:50:34.732057  139843 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 19:50:34.734140  139843 out.go:177] * Done! kubectl is now configured to use "ha-235073" cluster and "default" namespace by default
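	The two readiness checks recorded above (the kubelet systemd probe and the node-capacity read from GET /api/v1/nodes) can be reproduced by hand against the same cluster. A minimal sketch follows (not part of the captured output), assuming the "ha-235073" minikube profile and kubectl context from this run are still available:

	  # kubelet service check - the same command the test runs over SSH (sketch, not captured output)
	  minikube -p ha-235073 ssh -- sudo systemctl is-active --quiet service kubelet && echo kubelet-active

	  # node capacity check, equivalent to the GET /api/v1/nodes call logged above
	  kubectl --context ha-235073 get nodes \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'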
	
	
	==> CRI-O <==
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.567361022Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b85cdaa-4b23-44d9-814f-55ea18515151 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.568305777Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ef843b7-28da-4c1a-a777-9dde92fc0016 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.568745565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455652568724957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ef843b7-28da-4c1a-a777-9dde92fc0016 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.569471236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67e5659a-1751-4036-ad86-010f7d58d36f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.569545503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67e5659a-1751-4036-ad86-010f7d58d36f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.569809055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455438711049436,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228102873852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba82db411e0f901dff59f98c9e5ae0d5213285233844742c5879ce5b6232f35,PodSandboxId:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722455228083526798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228031037861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb
23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722455215945081182,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245521
1859729190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929,PodSandboxId:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224551953
39861966,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600867c5ba2b1d2ffd206e40,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455191497802122,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455191481530968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77,PodSandboxId:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455191397982754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e,PodSandboxId:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455191372229593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67e5659a-1751-4036-ad86-010f7d58d36f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.607874479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1546655-52b7-4adb-9f32-2393de58ce0e name=/runtime.v1.RuntimeService/Version
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.607945877Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1546655-52b7-4adb-9f32-2393de58ce0e name=/runtime.v1.RuntimeService/Version
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.609612914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6f58218-08bb-4d9e-aef4-6ba6fc439007 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.610067256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455652610044247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6f58218-08bb-4d9e-aef4-6ba6fc439007 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.610739578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b986137d-415b-4f01-82af-0eedaac9519b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.610812372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b986137d-415b-4f01-82af-0eedaac9519b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.611083821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455438711049436,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228102873852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba82db411e0f901dff59f98c9e5ae0d5213285233844742c5879ce5b6232f35,PodSandboxId:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722455228083526798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228031037861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb
23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722455215945081182,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245521
1859729190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929,PodSandboxId:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224551953
39861966,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600867c5ba2b1d2ffd206e40,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455191497802122,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455191481530968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77,PodSandboxId:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455191397982754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e,PodSandboxId:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455191372229593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b986137d-415b-4f01-82af-0eedaac9519b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.652295976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5a4b7fc-ed6c-45f0-afb8-3c4a6b8b9104 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.652383687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5a4b7fc-ed6c-45f0-afb8-3c4a6b8b9104 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.658435104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f5b9da6-cd04-4a4b-aef2-107e7cfa4097 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.658872447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455652658851304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f5b9da6-cd04-4a4b-aef2-107e7cfa4097 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.659678716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5ebb3a6-f5f9-41e9-a631-c9c63e7c53c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.659767438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5ebb3a6-f5f9-41e9-a631-c9c63e7c53c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.660264371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455438711049436,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228102873852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba82db411e0f901dff59f98c9e5ae0d5213285233844742c5879ce5b6232f35,PodSandboxId:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722455228083526798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228031037861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb
23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722455215945081182,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245521
1859729190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929,PodSandboxId:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224551953
39861966,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600867c5ba2b1d2ffd206e40,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455191497802122,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455191481530968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77,PodSandboxId:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455191397982754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e,PodSandboxId:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455191372229593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5ebb3a6-f5f9-41e9-a631-c9c63e7c53c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.673898325Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e05b50d5-21b3-4df9-9383-9144e21ddf2f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.674195550Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-g9vds,Uid:d1b34d06-e944-4236-afe0-1ee06ba4e666,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722455435997843904,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T19:50:35.669326663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-d2w7q,Uid:c47597b4-a38b-438c-9c3b-8f7f45130f75,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722455227832659667,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T19:47:07.499047535Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9cd9bb70-badc-4b4b-a135-62644edac7dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722455227808366049,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T19:47:07.496502006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-f7dzt,Uid:9549b5d7-bb23-4934-883b-dd07f8d864d8,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722455227793970580,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T19:47:07.486377722Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&PodSandboxMetadata{Name:kindnet-6mpsn,Uid:1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722455211676813166,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-07-31T19:46:51.366586953Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&PodSandboxMetadata{Name:kube-proxy-td8j2,Uid:b836edfa-4df1-40e4-a58a-3f23afd5b78b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722455211654295952,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T19:46:51.334353916Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-235073,Uid:51a308afa2b137aad975d5e22dcabd17,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1722455191219545436,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.146:8443,kubernetes.io/config.hash: 51a308afa2b137aad975d5e22dcabd17,kubernetes.io/config.seen: 2024-07-31T19:46:30.727830784Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-235073,Uid:8b9131be600867c5ba2b1d2ffd206e40,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722455191218614695,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600
867c5ba2b1d2ffd206e40,},Annotations:map[string]string{kubernetes.io/config.hash: 8b9131be600867c5ba2b1d2ffd206e40,kubernetes.io/config.seen: 2024-07-31T19:46:30.727833685Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-235073,Uid:adddf646550f2cb39fef0b7f6c02c656,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722455191218205727,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: adddf646550f2cb39fef0b7f6c02c656,kubernetes.io/config.seen: 2024-07-31T19:46:30.727832848Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Met
adata:&PodSandboxMetadata{Name:kube-controller-manager-ha-235073,Uid:16f5277261cc3e0ac6eb43af812478f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722455191199994583,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 16f5277261cc3e0ac6eb43af812478f1,kubernetes.io/config.seen: 2024-07-31T19:46:30.727831947Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&PodSandboxMetadata{Name:etcd-ha-235073,Uid:b69e7963c2d0df8833554e4876687c49,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722455191196315824,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-235073,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.146:2379,kubernetes.io/config.hash: b69e7963c2d0df8833554e4876687c49,kubernetes.io/config.seen: 2024-07-31T19:46:30.727827439Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e05b50d5-21b3-4df9-9383-9144e21ddf2f name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.674825276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c57b1b27-0370-4268-993c-a8a66aa43a2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.674892922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c57b1b27-0370-4268-993c-a8a66aa43a2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:54:12 ha-235073 crio[680]: time="2024-07-31 19:54:12.675359626Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455438711049436,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228102873852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba82db411e0f901dff59f98c9e5ae0d5213285233844742c5879ce5b6232f35,PodSandboxId:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722455228083526798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228031037861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb
23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722455215945081182,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245521
1859729190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929,PodSandboxId:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224551953
39861966,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600867c5ba2b1d2ffd206e40,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455191497802122,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455191481530968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77,PodSandboxId:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455191397982754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e,PodSandboxId:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455191372229593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c57b1b27-0370-4268-993c-a8a66aa43a2b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	36d67125ccdba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   6c4d1efc4989e       busybox-fc5497c4f-g9vds
	a9ddbd3f3cc5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   55ec4971c2e64       coredns-7db6d8ff4d-d2w7q
	eba82db411e0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   714a1d887a6e7       storage-provisioner
	30540ee956135       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   231aebfc0631b       coredns-7db6d8ff4d-f7dzt
	ee50c4b9e2394       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   feeccc2a1a3e7       kindnet-6mpsn
	8811952c62538       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   dbf6b114c5cb5       kube-proxy-td8j2
	c31d2ba10cadb       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   1174f1364f26d       kube-vip-ha-235073
	9d642debf242f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   58bfb1289eb04       etcd-ha-235073
	216984c6b7d59       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   c9f1bb2690bab       kube-scheduler-ha-235073
	cf0877f308475       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   d2fb34888cbe7       kube-apiserver-ha-235073
	c6ae1a1aafd35       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   13ce57fab67b3       kube-controller-manager-ha-235073
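For reference, the listing above can be reproduced directly against the CRI runtime on the node. A minimal sketch, not part of the captured run, assuming the ha-235073 profile is still running:

    minikube ssh -p ha-235073 -- sudo crictl ps -a
    # narrow to one container by name, e.g. the kube-apiserver static pod
    minikube ssh -p ha-235073 -- sudo crictl ps --name kube-apiserver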
	
	
	==> coredns [30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90] <==
	[INFO] 10.244.2.2:36658 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216258s
	[INFO] 10.244.2.2:43101 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00049202s
	[INFO] 10.244.1.2:41993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131309s
	[INFO] 10.244.1.2:58295 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204788s
	[INFO] 10.244.1.2:43074 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000178134s
	[INFO] 10.244.1.2:46950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165895s
	[INFO] 10.244.1.2:60484 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143802s
	[INFO] 10.244.0.4:58480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129276s
	[INFO] 10.244.2.2:36458 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001308986s
	[INFO] 10.244.2.2:48644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094253s
	[INFO] 10.244.1.2:34972 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151042s
	[INFO] 10.244.1.2:32819 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096017s
	[INFO] 10.244.1.2:48157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075225s
	[INFO] 10.244.0.4:54613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084738s
	[INFO] 10.244.0.4:60576 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000829s
	[INFO] 10.244.2.2:36544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164516s
	[INFO] 10.244.2.2:45708 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142016s
	[INFO] 10.244.2.2:40736 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110815s
	[INFO] 10.244.2.2:36751 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104862s
	[INFO] 10.244.1.2:54006 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000448605s
	[INFO] 10.244.1.2:59479 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121156s
	[INFO] 10.244.0.4:33169 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051358s
	[INFO] 10.244.2.2:44195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135177s
	[INFO] 10.244.2.2:36586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153451s
	[INFO] 10.244.2.2:56302 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124509s
	
	
	==> coredns [a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22] <==
	[INFO] 10.244.1.2:40987 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006648733s
	[INFO] 10.244.1.2:56046 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000294533s
	[INFO] 10.244.1.2:34815 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013751047s
	[INFO] 10.244.0.4:38669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014517s
	[INFO] 10.244.0.4:47964 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002016491s
	[INFO] 10.244.0.4:48652 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071321s
	[INFO] 10.244.0.4:47729 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081815s
	[INFO] 10.244.0.4:55084 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001248993s
	[INFO] 10.244.0.4:57805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076977s
	[INFO] 10.244.0.4:57456 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085752s
	[INFO] 10.244.2.2:38902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010955s
	[INFO] 10.244.2.2:36166 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001717036s
	[INFO] 10.244.2.2:32959 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086137s
	[INFO] 10.244.2.2:56090 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064343s
	[INFO] 10.244.2.2:53218 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067616s
	[INFO] 10.244.2.2:56028 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000210727s
	[INFO] 10.244.1.2:41979 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111285s
	[INFO] 10.244.0.4:50255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014493s
	[INFO] 10.244.0.4:37511 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157288s
	[INFO] 10.244.1.2:42868 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222662s
	[INFO] 10.244.1.2:42728 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124693s
	[INFO] 10.244.0.4:54532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008837s
	[INFO] 10.244.0.4:52959 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063732s
	[INFO] 10.244.0.4:56087 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045645s
	[INFO] 10.244.2.2:42350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130124s
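Both coredns replicas are answering the same lookup patterns (kubernetes.default, host.minikube.internal, reverse PTR queries) from the busybox pods on each node. A hypothetical spot check, using the busybox-fc5497c4f-g9vds pod from the container listing above and the ha-235073 kubeconfig context:

    kubectl --context ha-235073 exec busybox-fc5497c4f-g9vds -- nslookup kubernetes.default
    kubectl --context ha-235073 exec busybox-fc5497c4f-g9vds -- nslookup host.minikube.internal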
	
	
	==> describe nodes <==
	Name:               ha-235073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_46_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:46:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:54:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:51:13 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:51:13 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:51:13 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:51:13 +0000   Wed, 31 Jul 2024 19:47:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    ha-235073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e35869b5bfb347c6a5e12e63b257d2a1
	  System UUID:                e35869b5-bfb3-47c6-a5e1-2e63b257d2a1
	  Boot ID:                    846162a9-11ef-48d0-b284-9320ff7be7d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-g9vds              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-7db6d8ff4d-d2w7q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 coredns-7db6d8ff4d-f7dzt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 etcd-ha-235073                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m35s
	  kube-system                 kindnet-6mpsn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m21s
	  kube-system                 kube-apiserver-ha-235073             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-ha-235073    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-td8j2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-scheduler-ha-235073             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-vip-ha-235073                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m20s  kube-proxy       
	  Normal  Starting                 7m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m35s  kubelet          Node ha-235073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s  kubelet          Node ha-235073 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s  kubelet          Node ha-235073 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m22s  node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal  NodeReady                7m5s   kubelet          Node ha-235073 status is now: NodeReady
	  Normal  RegisteredNode           5m9s   node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal  RegisteredNode           3m50s  node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	
	
	Name:               ha-235073-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_48_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:48:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:51:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 19:50:49 +0000   Wed, 31 Jul 2024 19:52:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 19:50:49 +0000   Wed, 31 Jul 2024 19:52:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 19:50:49 +0000   Wed, 31 Jul 2024 19:52:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 19:50:49 +0000   Wed, 31 Jul 2024 19:52:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-235073-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55b090e5d4e04e9e843bceddcf4718db
	  System UUID:                55b090e5-d4e0-4e9e-843b-ceddcf4718db
	  Boot ID:                    60d7bb83-3d4a-4e10-bd0e-552a47937425
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d7lpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-235073-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-v5g92                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-235073-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-235073-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-proxy-4g5ws                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-235073-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-235073-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-235073-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-235073-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-235073-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-235073-m02 status is now: NodeNotReady
	
	
	Name:               ha-235073-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_50_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:50:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:54:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:51:06 +0000   Wed, 31 Jul 2024 19:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:51:06 +0000   Wed, 31 Jul 2024 19:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:51:06 +0000   Wed, 31 Jul 2024 19:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:51:06 +0000   Wed, 31 Jul 2024 19:50:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    ha-235073-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9779a35d74b41fd9b9796249c8a5396
	  System UUID:                e9779a35-d74b-41fd-9b97-96249c8a5396
	  Boot ID:                    e1dbb3c6-f968-4c2c-9a34-7c1181741d49
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wqc9h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-235073-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-964d5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-235073-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-235073-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-mkrmt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-235073-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-vip-ha-235073-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-235073-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-235073-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-235073-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal  RegisteredNode           3m51s                node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	
	
	Name:               ha-235073-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_51_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:51:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:54:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:51:42 +0000   Wed, 31 Jul 2024 19:51:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:51:42 +0000   Wed, 31 Jul 2024 19:51:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:51:42 +0000   Wed, 31 Jul 2024 19:51:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:51:42 +0000   Wed, 31 Jul 2024 19:51:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-235073-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0f8c10839cf446c8b0628fe1b69511a
	  System UUID:                f0f8c108-39cf-446c-8b06-28fe1b69511a
	  Boot ID:                    543e1880-ee64-4732-a58d-5bb5b1549018
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2gzbj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-jb89g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-235073-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-235073-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-235073-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-235073-m04 status is now: NodeReady
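Of the four nodes described above, only ha-235073-m02 reports Unknown conditions and carries the node.kubernetes.io/unreachable taints, which lines up with the StopSecondaryNode step in this test group. A quick way to pull just the condition summary, sketched here under the assumption that the ha-235073 kubeconfig context is available (this command is not part of the captured output):

    kubectl --context ha-235073 get nodes
    kubectl --context ha-235073 get node ha-235073-m02 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'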
	
	
	==> dmesg <==
	[Jul31 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051288] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039898] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.757176] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.430789] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.593096] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.152359] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.063310] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060385] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.158302] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127644] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264376] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129943] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +5.303318] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.056828] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.179861] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.138103] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +5.414223] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.822229] kauditd_printk_skb: 34 callbacks suppressed
	[Jul31 19:48] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae] <==
	{"level":"warn","ts":"2024-07-31T19:54:12.948535Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:12.953393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:12.971544Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:12.975919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:12.984663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:12.992414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:12.997175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.001383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.011378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.017404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.024206Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.027304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.030538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.037702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.041812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.043705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.049456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.052522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.055149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.060415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.066437Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.074637Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.080721Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.126469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:54:13.128064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
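The repeated "dropped internal Raft message" warnings above all target the same remote peer (575e9b91f63fd0d3) with remote-peer-active: false, i.e. the local etcd member keeps trying to heartbeat a peer that is no longer reachable. One way to confirm which node that member maps to is to list the members from inside the local etcd pod; this is only a sketch, and the certificate paths are an assumption based on minikube's usual layout under /var/lib/minikube/certs/etcd:

    kubectl --context ha-235073 -n kube-system exec etcd-ha-235073 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      member list -w table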
	
	
	==> kernel <==
	 19:54:13 up 8 min,  0 users,  load average: 0.57, 0.50, 0.25
	Linux ha-235073 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a] <==
	I0731 19:53:37.003741       1 main.go:299] handling current node
	I0731 19:53:47.002830       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:53:47.002874       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 19:53:47.003029       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:53:47.003058       1 main.go:299] handling current node
	I0731 19:53:47.003076       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:53:47.003090       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:53:47.003209       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:53:47.003232       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:53:56.996041       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:53:56.996152       1 main.go:299] handling current node
	I0731 19:53:56.996176       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:53:56.996185       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:53:56.996366       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:53:56.996400       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:53:56.996527       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:53:56.996560       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 19:54:07.002633       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:54:07.002766       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:54:07.002964       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:54:07.003005       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 19:54:07.003093       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:54:07.003210       1 main.go:299] handling current node
	I0731 19:54:07.003236       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:54:07.003253       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
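kindnet is still cycling through all four nodes and their pod CIDRs (10.244.0.0/24 through 10.244.3.0/24), matching the PodCIDR fields in the node descriptions above. The same node-to-CIDR mapping can be read back in one line; a hypothetical check, not part of this run:

    kubectl --context ha-235073 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'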
	
	
	==> kube-apiserver [cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77] <==
	I0731 19:46:37.714095       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 19:46:37.873030       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 19:46:51.204370       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 19:46:51.305256       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	http2: server: error reading preface from client 192.168.39.102:59490: read tcp 192.168.39.254:8443->192.168.39.102:59490: read: connection reset by peer
	E0731 19:48:47.430717       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0731 19:48:47.430864       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0731 19:48:47.431527       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 606.959µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0731 19:48:47.432550       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0731 19:48:47.433972       1 timeout.go:142] post-timeout activity - time-elapsed: 3.403318ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0731 19:50:40.305962       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46490: use of closed network connection
	E0731 19:50:40.493191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46494: use of closed network connection
	E0731 19:50:40.684978       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46520: use of closed network connection
	E0731 19:50:40.882497       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46548: use of closed network connection
	E0731 19:50:41.081570       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46572: use of closed network connection
	E0731 19:50:41.267294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46596: use of closed network connection
	E0731 19:50:41.455958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46624: use of closed network connection
	E0731 19:50:41.625324       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46638: use of closed network connection
	E0731 19:50:41.831477       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46654: use of closed network connection
	E0731 19:50:42.138349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46682: use of closed network connection
	E0731 19:50:42.310504       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46700: use of closed network connection
	E0731 19:50:42.500321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46714: use of closed network connection
	E0731 19:50:42.724185       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46732: use of closed network connection
	E0731 19:50:42.920677       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46738: use of closed network connection
	E0731 19:50:43.096888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46750: use of closed network connection
	
	
	==> kube-controller-manager [c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e] <==
	I0731 19:50:35.685216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.719092ms"
	I0731 19:50:35.833018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="147.712557ms"
	I0731 19:50:36.180784       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="347.482919ms"
	E0731 19:50:36.180839       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 19:50:36.180922       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.51µs"
	I0731 19:50:36.186765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.646µs"
	I0731 19:50:36.523477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.795µs"
	I0731 19:50:36.809473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.869µs"
	I0731 19:50:36.821770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.655µs"
	I0731 19:50:36.830199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.082µs"
	I0731 19:50:39.010400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.03828ms"
	I0731 19:50:39.011494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.234µs"
	I0731 19:50:39.619866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.111078ms"
	I0731 19:50:39.621157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.965µs"
	I0731 19:50:39.873592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.023382ms"
	I0731 19:50:39.873727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.946µs"
	E0731 19:51:10.864582       1 certificate_controller.go:146] Sync csr-w4j2z failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-w4j2z": the object has been modified; please apply your changes to the latest version and try again
	E0731 19:51:10.867469       1 certificate_controller.go:146] Sync csr-w4j2z failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-w4j2z": the object has been modified; please apply your changes to the latest version and try again
	I0731 19:51:11.132927       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-235073-m04\" does not exist"
	I0731 19:51:11.164551       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-235073-m04" podCIDRs=["10.244.3.0/24"]
	I0731 19:51:15.423430       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-235073-m04"
	I0731 19:51:31.835746       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-235073-m04"
	I0731 19:52:30.454615       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-235073-m04"
	I0731 19:52:30.586538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.963057ms"
	I0731 19:52:30.586646       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.015µs"
	
	
	==> kube-proxy [8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac] <==
	I0731 19:46:52.073670       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:46:52.091608       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0731 19:46:52.151680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:46:52.151738       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:46:52.151756       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:46:52.154737       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:46:52.155285       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:46:52.155345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:46:52.157051       1 config.go:192] "Starting service config controller"
	I0731 19:46:52.157340       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:46:52.157391       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:46:52.157396       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:46:52.158566       1 config.go:319] "Starting node config controller"
	I0731 19:46:52.158594       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:46:52.258407       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:46:52.258494       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:46:52.258668       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498] <==
	W0731 19:46:35.520688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 19:46:35.520783       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 19:46:35.524897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:46:35.524922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 19:46:35.652443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 19:46:35.652488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 19:46:35.678400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:46:35.678489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:46:35.733213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:46:35.733261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:46:35.752795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:46:35.752877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:46:35.800454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 19:46:35.800545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 19:46:35.847461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:46:35.847546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0731 19:46:36.184727       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 19:51:11.217044       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2gzbj\": pod kindnet-2gzbj is already assigned to node \"ha-235073-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2gzbj" node="ha-235073-m04"
	E0731 19:51:11.217254       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fd812d3e-fad7-43de-bab9-896c55ee3194(kube-system/kindnet-2gzbj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2gzbj"
	E0731 19:51:11.217292       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2gzbj\": pod kindnet-2gzbj is already assigned to node \"ha-235073-m04\"" pod="kube-system/kindnet-2gzbj"
	I0731 19:51:11.217317       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2gzbj" node="ha-235073-m04"
	E0731 19:51:11.217734       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jb89g\": pod kube-proxy-jb89g is already assigned to node \"ha-235073-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jb89g" node="ha-235073-m04"
	E0731 19:51:11.217852       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2bc1d841-cf7f-44ff-825f-bad1f2fd0ead(kube-system/kube-proxy-jb89g) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-jb89g"
	E0731 19:51:11.218006       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jb89g\": pod kube-proxy-jb89g is already assigned to node \"ha-235073-m04\"" pod="kube-system/kube-proxy-jb89g"
	I0731 19:51:11.218144       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jb89g" node="ha-235073-m04"
	
	
	==> kubelet <==
	Jul 31 19:50:35 ha-235073 kubelet[1388]: I0731 19:50:35.825891    1388 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz4zp\" (UniqueName: \"kubernetes.io/projected/8a99d9a7-f010-4dad-a31b-69b915c4d92d-kube-api-access-gz4zp\") pod \"busybox-fc5497c4f-gh7w4\" (UID: \"8a99d9a7-f010-4dad-a31b-69b915c4d92d\") " pod="default/busybox-fc5497c4f-gh7w4"
	Jul 31 19:50:36 ha-235073 kubelet[1388]: I0731 19:50:36.132369    1388 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz4zp\" (UniqueName: \"kubernetes.io/projected/8a99d9a7-f010-4dad-a31b-69b915c4d92d-kube-api-access-gz4zp\") pod \"8a99d9a7-f010-4dad-a31b-69b915c4d92d\" (UID: \"8a99d9a7-f010-4dad-a31b-69b915c4d92d\") "
	Jul 31 19:50:36 ha-235073 kubelet[1388]: I0731 19:50:36.137222    1388 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a99d9a7-f010-4dad-a31b-69b915c4d92d-kube-api-access-gz4zp" (OuterVolumeSpecName: "kube-api-access-gz4zp") pod "8a99d9a7-f010-4dad-a31b-69b915c4d92d" (UID: "8a99d9a7-f010-4dad-a31b-69b915c4d92d"). InnerVolumeSpecName "kube-api-access-gz4zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 19:50:36 ha-235073 kubelet[1388]: I0731 19:50:36.233537    1388 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gz4zp\" (UniqueName: \"kubernetes.io/projected/8a99d9a7-f010-4dad-a31b-69b915c4d92d-kube-api-access-gz4zp\") on node \"ha-235073\" DevicePath \"\""
	Jul 31 19:50:37 ha-235073 kubelet[1388]: I0731 19:50:37.823896    1388 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a99d9a7-f010-4dad-a31b-69b915c4d92d" path="/var/lib/kubelet/pods/8a99d9a7-f010-4dad-a31b-69b915c4d92d/volumes"
	Jul 31 19:50:37 ha-235073 kubelet[1388]: E0731 19:50:37.839833    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:50:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:50:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:50:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:50:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:51:37 ha-235073 kubelet[1388]: E0731 19:51:37.843335    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:51:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:51:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:51:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:51:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:52:37 ha-235073 kubelet[1388]: E0731 19:52:37.845506    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:52:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:52:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:52:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:52:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:53:37 ha-235073 kubelet[1388]: E0731 19:53:37.842283    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:53:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:53:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:53:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:53:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-235073 -n ha-235073
helpers_test.go:261: (dbg) Run:  kubectl --context ha-235073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 3 (3.196172804s)

                                                
                                                
-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:54:17.683322  144924 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:54:17.683435  144924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:17.683444  144924 out.go:304] Setting ErrFile to fd 2...
	I0731 19:54:17.683448  144924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:17.683609  144924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:54:17.683779  144924 out.go:298] Setting JSON to false
	I0731 19:54:17.683802  144924 mustload.go:65] Loading cluster: ha-235073
	I0731 19:54:17.683897  144924 notify.go:220] Checking for updates...
	I0731 19:54:17.684156  144924 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:54:17.684170  144924 status.go:255] checking status of ha-235073 ...
	I0731 19:54:17.684515  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:17.684575  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:17.702392  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46677
	I0731 19:54:17.702828  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:17.703503  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:17.703527  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:17.703864  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:17.704034  144924 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:54:17.705598  144924 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:54:17.705624  144924 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:17.706082  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:17.706140  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:17.721785  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0731 19:54:17.722192  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:17.722642  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:17.722660  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:17.723008  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:17.723179  144924 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:54:17.725925  144924 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:17.726331  144924 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:17.726358  144924 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:17.726475  144924 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:17.726871  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:17.726942  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:17.742057  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0731 19:54:17.742521  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:17.743075  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:17.743099  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:17.743390  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:17.743563  144924 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:54:17.743748  144924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:17.743789  144924 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:54:17.746711  144924 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:17.747158  144924 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:17.747193  144924 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:17.747326  144924 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:54:17.747483  144924 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:54:17.747603  144924 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:54:17.747735  144924 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:54:17.825151  144924 ssh_runner.go:195] Run: systemctl --version
	I0731 19:54:17.832105  144924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:17.848218  144924 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:17.848254  144924 api_server.go:166] Checking apiserver status ...
	I0731 19:54:17.848306  144924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:17.870151  144924 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:54:17.880485  144924 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:17.880533  144924 ssh_runner.go:195] Run: ls
	I0731 19:54:17.885026  144924 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:17.889498  144924 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:17.889522  144924 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:54:17.889535  144924 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:17.889564  144924 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:54:17.889863  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:17.889896  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:17.904470  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I0731 19:54:17.904973  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:17.905618  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:17.905642  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:17.906018  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:17.906209  144924 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:54:17.907755  144924 status.go:330] ha-235073-m02 host status = "Running" (err=<nil>)
	I0731 19:54:17.907770  144924 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:17.908096  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:17.908137  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:17.922584  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I0731 19:54:17.923002  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:17.923479  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:17.923505  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:17.923821  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:17.924010  144924 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:54:17.927036  144924 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:17.927458  144924 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:17.927494  144924 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:17.927582  144924 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:17.927854  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:17.927884  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:17.942234  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0731 19:54:17.942726  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:17.943200  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:17.943221  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:17.943520  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:17.943681  144924 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:54:17.943852  144924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:17.943872  144924 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:54:17.946786  144924 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:17.947294  144924 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:17.947318  144924 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:17.947458  144924 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:54:17.947614  144924 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:54:17.947771  144924 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:54:17.947919  144924 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	W0731 19:54:20.485668  144924 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:20.485765  144924 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0731 19:54:20.485801  144924 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:20.485811  144924 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 19:54:20.485835  144924 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:20.485848  144924 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:54:20.486292  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:20.486358  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:20.502407  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33085
	I0731 19:54:20.502869  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:20.503335  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:20.503361  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:20.503651  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:20.503852  144924 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:54:20.505297  144924 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:54:20.505314  144924 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:20.505632  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:20.505675  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:20.521647  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46149
	I0731 19:54:20.522062  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:20.522548  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:20.522567  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:20.522989  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:20.523182  144924 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:54:20.525854  144924 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:20.526234  144924 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:20.526262  144924 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:20.526402  144924 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:20.526804  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:20.526858  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:20.542293  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0731 19:54:20.542700  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:20.543154  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:20.543172  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:20.543473  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:20.543695  144924 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:54:20.543866  144924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:20.543889  144924 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:54:20.546396  144924 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:20.546801  144924 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:20.546820  144924 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:20.546972  144924 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:54:20.547133  144924 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:54:20.547289  144924 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:54:20.547428  144924 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:54:20.632752  144924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:20.647786  144924 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:20.647811  144924 api_server.go:166] Checking apiserver status ...
	I0731 19:54:20.647847  144924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:20.661488  144924 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:54:20.671628  144924 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:20.671681  144924 ssh_runner.go:195] Run: ls
	I0731 19:54:20.676201  144924 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:20.680300  144924 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:20.680325  144924 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:54:20.680335  144924 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:20.680351  144924 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:54:20.680632  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:20.680668  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:20.695400  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38469
	I0731 19:54:20.695893  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:20.696343  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:20.696362  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:20.696645  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:20.696828  144924 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:54:20.698265  144924 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:54:20.698288  144924 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:20.698567  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:20.698596  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:20.714307  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43537
	I0731 19:54:20.714785  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:20.715308  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:20.715330  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:20.715645  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:20.715831  144924 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:54:20.718333  144924 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:20.718781  144924 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:20.718806  144924 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:20.718940  144924 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:20.719323  144924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:20.719385  144924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:20.733773  144924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35579
	I0731 19:54:20.734182  144924 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:20.734572  144924 main.go:141] libmachine: Using API Version  1
	I0731 19:54:20.734591  144924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:20.734905  144924 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:20.735083  144924 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:54:20.735263  144924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:20.735281  144924 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:54:20.737605  144924 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:20.738050  144924 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:20.738078  144924 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:20.738235  144924 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:54:20.738423  144924 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:54:20.738651  144924 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:54:20.738815  144924 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:54:20.820811  144924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:20.835764  144924 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
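
The stderr above shows where the per-node status comes from: the status command ssh-es into each guest and checks /var usage (sh -c "df -h /var | awk 'NR==2{print $5}'"); when the dial to 192.168.39.102:22 fails with "connect: no route to host", the node is reported as host: Error with kubelet/apiserver: Nonexistent. Below is a minimal, hypothetical sketch of that kind of probe using the stock ssh client. The probeNode helper, the flag set, and the output format are illustrative assumptions, not minikube's actual ssh_runner/status implementation.

// probe_node.go - hypothetical sketch of an SSH-based host probe, modeled on
// the behaviour visible in the stderr above. Not minikube's real code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// probeNode runs `df -h /var` on the node over ssh and returns a coarse status.
// An SSH dial failure (e.g. "connect: no route to host") means the guest cannot
// be inspected at all, which the report surfaces as host: Error.
func probeNode(ip, keyPath string) string {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"-o", "ConnectTimeout=5",
		"docker@"+ip,
		"df -h /var",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Sprintf("host: Error, kubelet: Nonexistent, apiserver: Nonexistent (%v)", err)
	}
	// df prints a header line followed by the filesystem line; field 5 is Use%.
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) >= 2 {
		if fields := strings.Fields(lines[1]); len(fields) >= 5 {
			return "host: Running, /var usage: " + fields[4]
		}
	}
	return "host: Running, /var usage: unknown"
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: probe_node <node-ip> <ssh-key>")
		os.Exit(1)
	}
	fmt.Println(probeNode(os.Args[1], os.Args[2]))
}

Run against ha-235073-m02 right after `node start m02`, such a probe would report Error for a few seconds until sshd in the restarted guest accepts connections again, which matches the exit status 3 seen from the status command below.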
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 3 (2.308603973s)

                                                
                                                
-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:54:21.648471  145025 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:54:21.648754  145025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:21.648763  145025 out.go:304] Setting ErrFile to fd 2...
	I0731 19:54:21.648768  145025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:21.648935  145025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:54:21.649105  145025 out.go:298] Setting JSON to false
	I0731 19:54:21.649128  145025 mustload.go:65] Loading cluster: ha-235073
	I0731 19:54:21.649174  145025 notify.go:220] Checking for updates...
	I0731 19:54:21.649652  145025 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:54:21.649675  145025 status.go:255] checking status of ha-235073 ...
	I0731 19:54:21.650133  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:21.650173  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:21.664982  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I0731 19:54:21.665445  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:21.666120  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:21.666143  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:21.666497  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:21.666698  145025 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:54:21.668304  145025 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:54:21.668330  145025 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:21.668660  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:21.668703  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:21.683830  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I0731 19:54:21.684269  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:21.684805  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:21.684824  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:21.685133  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:21.685306  145025 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:54:21.687962  145025 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:21.688375  145025 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:21.688409  145025 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:21.688507  145025 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:21.688844  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:21.688881  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:21.703198  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0731 19:54:21.703673  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:21.704144  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:21.704167  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:21.704475  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:21.704678  145025 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:54:21.704899  145025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:21.704936  145025 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:54:21.707694  145025 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:21.708080  145025 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:21.708104  145025 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:21.708240  145025 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:54:21.708418  145025 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:54:21.708569  145025 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:54:21.708730  145025 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:54:21.785234  145025 ssh_runner.go:195] Run: systemctl --version
	I0731 19:54:21.792184  145025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:21.806343  145025 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:21.806378  145025 api_server.go:166] Checking apiserver status ...
	I0731 19:54:21.806417  145025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:21.820921  145025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:54:21.831105  145025 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:21.831165  145025 ssh_runner.go:195] Run: ls
	I0731 19:54:21.835863  145025 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:21.842229  145025 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:21.842252  145025 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:54:21.842266  145025 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:21.842300  145025 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:54:21.842719  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:21.842765  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:21.858333  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I0731 19:54:21.858836  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:21.859364  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:21.859397  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:21.859741  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:21.859940  145025 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:54:21.861488  145025 status.go:330] ha-235073-m02 host status = "Running" (err=<nil>)
	I0731 19:54:21.861510  145025 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:21.861838  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:21.861901  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:21.877753  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0731 19:54:21.878121  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:21.878589  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:21.878612  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:21.878959  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:21.879148  145025 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:54:21.881776  145025 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:21.882154  145025 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:21.882180  145025 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:21.882313  145025 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:21.882713  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:21.882757  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:21.897790  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33207
	I0731 19:54:21.898223  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:21.898633  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:21.898654  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:21.898932  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:21.899102  145025 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:54:21.899259  145025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:21.899276  145025 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:54:21.902156  145025 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:21.902564  145025 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:21.902596  145025 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:21.902756  145025 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:54:21.902940  145025 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:54:21.903077  145025 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:54:21.903198  145025 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	W0731 19:54:23.557671  145025 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:23.557754  145025 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0731 19:54:23.557771  145025 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:23.557780  145025 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 19:54:23.557797  145025 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:23.557804  145025 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:54:23.558185  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:23.558237  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:23.573015  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42153
	I0731 19:54:23.573439  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:23.573884  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:23.573904  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:23.574195  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:23.574379  145025 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:54:23.576044  145025 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:54:23.576063  145025 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:23.576390  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:23.576440  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:23.591114  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41571
	I0731 19:54:23.591546  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:23.591947  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:23.591965  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:23.592272  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:23.592424  145025 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:54:23.595057  145025 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:23.595404  145025 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:23.595456  145025 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:23.595631  145025 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:23.595947  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:23.595989  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:23.611363  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0731 19:54:23.611725  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:23.612293  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:23.612321  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:23.612657  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:23.612881  145025 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:54:23.613272  145025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:23.613295  145025 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:54:23.615879  145025 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:23.616309  145025 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:23.616333  145025 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:23.616461  145025 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:54:23.616643  145025 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:54:23.616762  145025 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:54:23.616896  145025 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:54:23.700815  145025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:23.716402  145025 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:23.716439  145025 api_server.go:166] Checking apiserver status ...
	I0731 19:54:23.716482  145025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:23.730928  145025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:54:23.740638  145025 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:23.740690  145025 ssh_runner.go:195] Run: ls
	I0731 19:54:23.745414  145025 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:23.751193  145025 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:23.751215  145025 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:54:23.751224  145025 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:23.751239  145025 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:54:23.751580  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:23.751613  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:23.766343  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0731 19:54:23.766856  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:23.767420  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:23.767447  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:23.767748  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:23.767932  145025 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:54:23.769500  145025 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:54:23.769516  145025 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:23.769912  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:23.769957  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:23.784728  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
	I0731 19:54:23.785187  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:23.785716  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:23.785742  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:23.786064  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:23.786245  145025 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:54:23.789012  145025 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:23.789393  145025 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:23.789428  145025 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:23.789542  145025 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:23.789889  145025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:23.789938  145025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:23.805084  145025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0731 19:54:23.805633  145025 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:23.806235  145025 main.go:141] libmachine: Using API Version  1
	I0731 19:54:23.806269  145025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:23.806587  145025 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:23.806777  145025 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:54:23.806945  145025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:23.806966  145025 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:54:23.809928  145025 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:23.810370  145025 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:23.810408  145025 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:23.810559  145025 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:54:23.810758  145025 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:54:23.810946  145025 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:54:23.811123  145025 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:54:23.897305  145025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:23.913427  145025 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 3 (4.716390088s)

-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 19:54:25.678498  145125 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:54:25.678628  145125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:25.678637  145125 out.go:304] Setting ErrFile to fd 2...
	I0731 19:54:25.678641  145125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:25.678834  145125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:54:25.679001  145125 out.go:298] Setting JSON to false
	I0731 19:54:25.679028  145125 mustload.go:65] Loading cluster: ha-235073
	I0731 19:54:25.679071  145125 notify.go:220] Checking for updates...
	I0731 19:54:25.679396  145125 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:54:25.679411  145125 status.go:255] checking status of ha-235073 ...
	I0731 19:54:25.679901  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:25.679994  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:25.698440  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0731 19:54:25.698937  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:25.699644  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:25.699679  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:25.700014  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:25.700208  145125 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:54:25.701820  145125 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:54:25.701849  145125 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:25.702123  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:25.702156  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:25.717187  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0731 19:54:25.717653  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:25.718139  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:25.718170  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:25.718508  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:25.718716  145125 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:54:25.721799  145125 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:25.722224  145125 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:25.722248  145125 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:25.722396  145125 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:25.722694  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:25.722742  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:25.738183  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
	I0731 19:54:25.738685  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:25.739177  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:25.739202  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:25.739491  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:25.739658  145125 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:54:25.739835  145125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:25.739875  145125 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:54:25.742686  145125 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:25.743095  145125 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:25.743121  145125 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:25.743238  145125 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:54:25.743449  145125 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:54:25.743640  145125 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:54:25.743859  145125 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:54:25.833080  145125 ssh_runner.go:195] Run: systemctl --version
	I0731 19:54:25.839423  145125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:25.855989  145125 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:25.856019  145125 api_server.go:166] Checking apiserver status ...
	I0731 19:54:25.856054  145125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:25.873442  145125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:54:25.884505  145125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:25.884572  145125 ssh_runner.go:195] Run: ls
	I0731 19:54:25.889215  145125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:25.895824  145125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:25.895848  145125 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:54:25.895858  145125 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:25.895873  145125 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:54:25.896201  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:25.896247  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:25.911652  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I0731 19:54:25.912150  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:25.912668  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:25.912688  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:25.913053  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:25.913257  145125 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:54:25.914726  145125 status.go:330] ha-235073-m02 host status = "Running" (err=<nil>)
	I0731 19:54:25.914743  145125 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:25.915108  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:25.915147  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:25.930339  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38337
	I0731 19:54:25.930819  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:25.931329  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:25.931351  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:25.931707  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:25.931937  145125 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:54:25.934728  145125 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:25.935051  145125 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:25.935076  145125 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:25.935204  145125 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:25.935624  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:25.935673  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:25.952249  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0731 19:54:25.952710  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:25.953170  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:25.953191  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:25.953532  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:25.953722  145125 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:54:25.953950  145125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:25.953979  145125 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:54:25.956676  145125 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:25.957084  145125 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:25.957116  145125 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:25.957271  145125 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:54:25.957480  145125 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:54:25.957643  145125 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:54:25.957792  145125 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	W0731 19:54:26.629590  145125 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:26.629637  145125 retry.go:31] will retry after 294.511157ms: dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:29.989663  145125 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:29.989775  145125 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0731 19:54:29.989799  145125 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:29.989811  145125 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 19:54:29.989843  145125 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:29.989853  145125 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:54:29.990162  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:29.990228  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:30.005790  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33899
	I0731 19:54:30.006294  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:30.006781  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:30.006802  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:30.007118  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:30.007357  145125 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:54:30.008904  145125 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:54:30.008924  145125 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:30.009351  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:30.009398  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:30.024587  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0731 19:54:30.024980  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:30.025460  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:30.025488  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:30.025799  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:30.026047  145125 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:54:30.028678  145125 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:30.029068  145125 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:30.029082  145125 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:30.029276  145125 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:30.029658  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:30.029726  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:30.045426  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0731 19:54:30.045867  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:30.046390  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:30.046420  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:30.046698  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:30.046923  145125 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:54:30.047205  145125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:30.047228  145125 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:54:30.050305  145125 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:30.050730  145125 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:30.050761  145125 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:30.050889  145125 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:54:30.051089  145125 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:54:30.051228  145125 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:54:30.051380  145125 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:54:30.137092  145125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:30.153382  145125 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:30.153421  145125 api_server.go:166] Checking apiserver status ...
	I0731 19:54:30.153483  145125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:30.167764  145125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:54:30.178108  145125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:30.178174  145125 ssh_runner.go:195] Run: ls
	I0731 19:54:30.183225  145125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:30.189497  145125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:30.189525  145125 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:54:30.189534  145125 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:30.189549  145125 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:54:30.189832  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:30.189866  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:30.205606  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33859
	I0731 19:54:30.206138  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:30.206618  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:30.206639  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:30.207007  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:30.207240  145125 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:54:30.209068  145125 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:54:30.209088  145125 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:30.209446  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:30.209485  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:30.224724  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0731 19:54:30.225240  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:30.225921  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:30.225948  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:30.226275  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:30.226453  145125 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:54:30.229171  145125 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:30.229713  145125 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:30.229746  145125 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:30.229948  145125 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:30.230292  145125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:30.230361  145125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:30.246135  145125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
	I0731 19:54:30.246653  145125 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:30.247224  145125 main.go:141] libmachine: Using API Version  1
	I0731 19:54:30.247252  145125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:30.247699  145125 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:30.247945  145125 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:54:30.248156  145125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:30.248184  145125 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:54:30.251211  145125 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:30.251630  145125 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:30.251654  145125 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:30.251860  145125 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:54:30.252067  145125 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:54:30.252223  145125 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:54:30.252370  145125 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:54:30.336961  145125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:30.351711  145125 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 3 (4.170654333s)

-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 19:54:32.681539  145226 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:54:32.681653  145226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:32.681662  145226 out.go:304] Setting ErrFile to fd 2...
	I0731 19:54:32.681667  145226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:32.681842  145226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:54:32.682042  145226 out.go:298] Setting JSON to false
	I0731 19:54:32.682076  145226 mustload.go:65] Loading cluster: ha-235073
	I0731 19:54:32.682178  145226 notify.go:220] Checking for updates...
	I0731 19:54:32.682466  145226 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:54:32.682482  145226 status.go:255] checking status of ha-235073 ...
	I0731 19:54:32.682850  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:32.682906  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:32.700491  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35923
	I0731 19:54:32.700934  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:32.701551  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:32.701576  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:32.701914  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:32.702110  145226 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:54:32.703999  145226 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:54:32.704036  145226 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:32.704337  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:32.704373  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:32.719610  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0731 19:54:32.720064  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:32.720538  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:32.720563  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:32.720888  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:32.721086  145226 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:54:32.723561  145226 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:32.723936  145226 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:32.723970  145226 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:32.724079  145226 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:32.724404  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:32.724438  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:32.740418  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0731 19:54:32.740836  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:32.741388  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:32.741428  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:32.741752  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:32.741942  145226 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:54:32.742120  145226 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:32.742159  145226 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:54:32.745036  145226 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:32.745433  145226 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:32.745468  145226 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:32.745595  145226 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:54:32.745766  145226 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:54:32.745949  145226 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:54:32.746085  145226 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:54:32.825545  145226 ssh_runner.go:195] Run: systemctl --version
	I0731 19:54:32.832376  145226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:32.849743  145226 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:32.849773  145226 api_server.go:166] Checking apiserver status ...
	I0731 19:54:32.849809  145226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:32.865851  145226 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:54:32.876667  145226 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:32.876732  145226 ssh_runner.go:195] Run: ls
	I0731 19:54:32.881309  145226 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:32.885741  145226 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:32.885766  145226 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:54:32.885777  145226 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:32.885797  145226 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:54:32.886099  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:32.886153  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:32.901306  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0731 19:54:32.901861  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:32.902370  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:32.902392  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:32.902810  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:32.903005  145226 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:54:32.904786  145226 status.go:330] ha-235073-m02 host status = "Running" (err=<nil>)
	I0731 19:54:32.904805  145226 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:32.905142  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:32.905200  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:32.920742  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0731 19:54:32.921200  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:32.921727  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:32.921748  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:32.922087  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:32.922300  145226 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:54:32.925125  145226 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:32.925578  145226 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:32.925599  145226 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:32.925719  145226 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:32.926138  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:32.926186  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:32.941933  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0731 19:54:32.942378  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:32.942887  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:32.942906  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:32.943223  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:32.943421  145226 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:54:32.943623  145226 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:32.943645  145226 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:54:32.946440  145226 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:32.946853  145226 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:32.946880  145226 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:32.947013  145226 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:54:32.947182  145226 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:54:32.947333  145226 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:54:32.947494  145226 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	W0731 19:54:33.061619  145226 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:33.061666  145226 retry.go:31] will retry after 321.015529ms: dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:36.453667  145226 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:36.453759  145226 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0731 19:54:36.453774  145226 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:36.453781  145226 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 19:54:36.453801  145226 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:36.453808  145226 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:54:36.454113  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:36.454161  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:36.468934  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0731 19:54:36.469376  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:36.469856  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:36.469878  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:36.470209  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:36.470389  145226 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:54:36.471717  145226 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:54:36.471734  145226 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:36.472020  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:36.472053  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:36.487958  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38117
	I0731 19:54:36.488393  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:36.488892  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:36.488910  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:36.489219  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:36.489419  145226 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:54:36.491952  145226 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:36.492295  145226 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:36.492322  145226 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:36.492490  145226 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:36.492778  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:36.492828  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:36.508957  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0731 19:54:36.509425  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:36.509906  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:36.509929  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:36.510284  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:36.510472  145226 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:54:36.510727  145226 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:36.510748  145226 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:54:36.513469  145226 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:36.514000  145226 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:36.514041  145226 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:36.514179  145226 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:54:36.514354  145226 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:54:36.514518  145226 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:54:36.514658  145226 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:54:36.600792  145226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:36.616135  145226 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:36.616167  145226 api_server.go:166] Checking apiserver status ...
	I0731 19:54:36.616209  145226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:36.630455  145226 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:54:36.640281  145226 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:36.640347  145226 ssh_runner.go:195] Run: ls
	I0731 19:54:36.645286  145226 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:36.651630  145226 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:36.651654  145226 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:54:36.651663  145226 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:36.651679  145226 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:54:36.651977  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:36.652010  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:36.666877  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40589
	I0731 19:54:36.667325  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:36.667907  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:36.667931  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:36.668221  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:36.668403  145226 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:54:36.669947  145226 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:54:36.669967  145226 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:36.670247  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:36.670283  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:36.685320  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0731 19:54:36.685897  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:36.686385  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:36.686405  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:36.686729  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:36.686932  145226 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:54:36.689675  145226 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:36.690112  145226 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:36.690162  145226 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:36.690238  145226 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:36.690539  145226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:36.690578  145226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:36.706426  145226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
	I0731 19:54:36.706915  145226 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:36.707418  145226 main.go:141] libmachine: Using API Version  1
	I0731 19:54:36.707440  145226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:36.707755  145226 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:36.707943  145226 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:54:36.708181  145226 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:36.708203  145226 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:54:36.711198  145226 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:36.711615  145226 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:36.711635  145226 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:36.711811  145226 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:54:36.711979  145226 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:54:36.712128  145226 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:54:36.712291  145226 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:54:36.792788  145226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:36.807600  145226 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
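For context on the log above: each `status` run resolves the cluster's load-balanced apiserver endpoint from the kubeconfig and then probes `https://192.168.39.254:8443/healthz`, treating a 200 response as "apiserver: Running". The sketch below is a minimal, hedged illustration of that probe written for this report, not minikube's own code; the endpoint URL is taken from the log and TLS verification is skipped only because the test apiserver uses a self-signed certificate.

```go
// Minimal sketch (not minikube's implementation): probe an apiserver
// /healthz endpoint the way the log above reports it.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test apiserver presents a self-signed cert, so skip verification
		// in this illustrative probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 means the apiserver is serving
}
```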
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 3 (3.929223547s)

                                                
                                                
-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:54:39.197622  145343 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:54:39.197740  145343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:39.197748  145343 out.go:304] Setting ErrFile to fd 2...
	I0731 19:54:39.197752  145343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:39.197938  145343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:54:39.198091  145343 out.go:298] Setting JSON to false
	I0731 19:54:39.198116  145343 mustload.go:65] Loading cluster: ha-235073
	I0731 19:54:39.198221  145343 notify.go:220] Checking for updates...
	I0731 19:54:39.198527  145343 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:54:39.198545  145343 status.go:255] checking status of ha-235073 ...
	I0731 19:54:39.198903  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:39.198961  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:39.217175  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I0731 19:54:39.217709  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:39.218426  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:39.218451  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:39.218883  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:39.219107  145343 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:54:39.220824  145343 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:54:39.220850  145343 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:39.221150  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:39.221192  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:39.237202  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39809
	I0731 19:54:39.237751  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:39.238208  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:39.238230  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:39.238544  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:39.238725  145343 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:54:39.241446  145343 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:39.241827  145343 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:39.241853  145343 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:39.242012  145343 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:39.242331  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:39.242371  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:39.256932  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0731 19:54:39.257381  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:39.257899  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:39.257922  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:39.258226  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:39.258411  145343 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:54:39.258597  145343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:39.258619  145343 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:54:39.261141  145343 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:39.261609  145343 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:39.261643  145343 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:39.261803  145343 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:54:39.261936  145343 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:54:39.262041  145343 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:54:39.262138  145343 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:54:39.345456  145343 ssh_runner.go:195] Run: systemctl --version
	I0731 19:54:39.352156  145343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:39.368061  145343 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:39.368093  145343 api_server.go:166] Checking apiserver status ...
	I0731 19:54:39.368128  145343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:39.386594  145343 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:54:39.398746  145343 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:39.398806  145343 ssh_runner.go:195] Run: ls
	I0731 19:54:39.403241  145343 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:39.407572  145343 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:39.407594  145343 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:54:39.407603  145343 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:39.407623  145343 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:54:39.407964  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:39.408000  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:39.424765  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I0731 19:54:39.425195  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:39.425777  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:39.425806  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:39.426150  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:39.426423  145343 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:54:39.428109  145343 status.go:330] ha-235073-m02 host status = "Running" (err=<nil>)
	I0731 19:54:39.428128  145343 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:39.428403  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:39.428443  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:39.444033  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0731 19:54:39.444439  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:39.444867  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:39.444890  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:39.445181  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:39.445394  145343 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:54:39.448020  145343 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:39.448432  145343 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:39.448477  145343 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:39.448652  145343 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:39.449043  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:39.449097  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:39.463867  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39199
	I0731 19:54:39.464351  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:39.464889  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:39.464912  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:39.465283  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:39.465500  145343 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:54:39.465711  145343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:39.465741  145343 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:54:39.468544  145343 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:39.468991  145343 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:39.469030  145343 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:39.469196  145343 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:54:39.469388  145343 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:54:39.469563  145343 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:54:39.469708  145343 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	W0731 19:54:39.525532  145343 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:39.525582  145343 retry.go:31] will retry after 145.519256ms: dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:42.725608  145343 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:42.725714  145343 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0731 19:54:42.725738  145343 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:42.725748  145343 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 19:54:42.725783  145343 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:42.725795  145343 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:54:42.726097  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:42.726140  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:42.741629  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0731 19:54:42.742107  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:42.742583  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:42.742603  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:42.742977  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:42.743177  145343 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:54:42.744749  145343 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:54:42.744768  145343 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:42.745068  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:42.745104  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:42.760661  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33795
	I0731 19:54:42.761113  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:42.761633  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:42.761657  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:42.761995  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:42.762204  145343 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:54:42.764998  145343 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:42.765380  145343 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:42.765402  145343 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:42.765545  145343 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:42.765882  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:42.765926  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:42.782321  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40473
	I0731 19:54:42.782846  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:42.783314  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:42.783334  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:42.783657  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:42.783829  145343 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:54:42.783978  145343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:42.783994  145343 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:54:42.786624  145343 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:42.787041  145343 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:42.787074  145343 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:42.787189  145343 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:54:42.787373  145343 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:54:42.787550  145343 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:54:42.787722  145343 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:54:42.872778  145343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:42.888444  145343 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:42.888476  145343 api_server.go:166] Checking apiserver status ...
	I0731 19:54:42.888517  145343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:42.903110  145343 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:54:42.914585  145343 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:42.914648  145343 ssh_runner.go:195] Run: ls
	I0731 19:54:42.919539  145343 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:42.924361  145343 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:42.924393  145343 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:54:42.924405  145343 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:42.924459  145343 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:54:42.924882  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:42.924925  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:42.941581  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I0731 19:54:42.942044  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:42.942587  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:42.942611  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:42.942951  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:42.943164  145343 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:54:42.944836  145343 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:54:42.944853  145343 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:42.945198  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:42.945245  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:42.960639  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I0731 19:54:42.961100  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:42.961639  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:42.961659  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:42.961986  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:42.962189  145343 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:54:42.964859  145343 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:42.965260  145343 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:42.965299  145343 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:42.965425  145343 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:42.965818  145343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:42.965873  145343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:42.980848  145343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0731 19:54:42.981354  145343 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:42.981879  145343 main.go:141] libmachine: Using API Version  1
	I0731 19:54:42.981898  145343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:42.982203  145343 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:42.982358  145343 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:54:42.982532  145343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:42.982551  145343 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:54:42.985047  145343 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:42.985526  145343 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:42.985564  145343 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:42.985731  145343 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:54:42.985920  145343 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:54:42.986077  145343 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:54:42.986211  145343 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:54:43.069138  145343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:43.083413  145343 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
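The `host: Error` result for ha-235073-m02 in the run above comes from the SSH dial to 192.168.39.102:22 failing with "connect: no route to host", after which the status code reports kubelet and apiserver as Nonexistent without attempting further probes. A minimal sketch of that reachability check, written for this report under the assumption that a plain TCP dial to the node's SSH port reproduces the same error, is shown below; the address is taken from the log.

```go
// Minimal sketch (assumption, not minikube code): reproduce the dial failure
// seen in the log by connecting to the node's SSH port directly.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.102:22" // ha-235073-m02, taken from the log
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// e.g. "dial tcp 192.168.39.102:22: connect: no route to host"
		fmt.Println("node unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable on", addr)
}
```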
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 3 (3.734096604s)

                                                
                                                
-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:54:49.461873  145461 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:54:49.462116  145461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:49.462126  145461 out.go:304] Setting ErrFile to fd 2...
	I0731 19:54:49.462130  145461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:54:49.462334  145461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:54:49.462498  145461 out.go:298] Setting JSON to false
	I0731 19:54:49.462526  145461 mustload.go:65] Loading cluster: ha-235073
	I0731 19:54:49.462565  145461 notify.go:220] Checking for updates...
	I0731 19:54:49.463045  145461 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:54:49.463068  145461 status.go:255] checking status of ha-235073 ...
	I0731 19:54:49.463584  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:49.463657  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:49.483964  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0731 19:54:49.484374  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:49.484983  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:49.485003  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:49.485426  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:49.485689  145461 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:54:49.487486  145461 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:54:49.487513  145461 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:49.487855  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:49.487897  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:49.502498  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42217
	I0731 19:54:49.502943  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:49.503439  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:49.503467  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:49.503795  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:49.504006  145461 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:54:49.507243  145461 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:49.507606  145461 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:49.507656  145461 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:49.507766  145461 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:54:49.508133  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:49.508173  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:49.524137  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0731 19:54:49.524619  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:49.525060  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:49.525079  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:49.525422  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:49.525592  145461 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:54:49.525808  145461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:49.525840  145461 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:54:49.528432  145461 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:49.528804  145461 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:54:49.528835  145461 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:54:49.528997  145461 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:54:49.529167  145461 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:54:49.529326  145461 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:54:49.529473  145461 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:54:49.605957  145461 ssh_runner.go:195] Run: systemctl --version
	I0731 19:54:49.614993  145461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:49.630171  145461 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:49.630202  145461 api_server.go:166] Checking apiserver status ...
	I0731 19:54:49.630244  145461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:49.647231  145461 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:54:49.658014  145461 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:49.658059  145461 ssh_runner.go:195] Run: ls
	I0731 19:54:49.662279  145461 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:49.666297  145461 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:49.666320  145461 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:54:49.666332  145461 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:49.666349  145461 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:54:49.666674  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:49.666708  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:49.681875  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33161
	I0731 19:54:49.682318  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:49.682785  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:49.682805  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:49.683146  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:49.683330  145461 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:54:49.684831  145461 status.go:330] ha-235073-m02 host status = "Running" (err=<nil>)
	I0731 19:54:49.684846  145461 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:49.685130  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:49.685174  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:49.700054  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0731 19:54:49.700474  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:49.700994  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:49.701014  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:49.701321  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:49.701517  145461 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:54:49.704239  145461 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:49.704607  145461 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:49.704631  145461 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:49.704754  145461 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 19:54:49.705049  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:49.705081  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:49.720075  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0731 19:54:49.720484  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:49.720927  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:49.720946  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:49.721259  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:49.721465  145461 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:54:49.721626  145461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:49.721649  145461 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:54:49.724068  145461 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:49.724482  145461 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:54:49.724504  145461 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:54:49.724666  145461 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:54:49.724833  145461 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:54:49.725006  145461 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:54:49.725168  145461 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	W0731 19:54:52.805582  145461 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0731 19:54:52.805681  145461 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0731 19:54:52.805697  145461 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:52.805706  145461 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 19:54:52.805723  145461 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0731 19:54:52.805730  145461 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:54:52.806034  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:52.806074  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:52.820524  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I0731 19:54:52.820981  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:52.821498  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:52.821524  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:52.821839  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:52.822017  145461 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:54:52.823477  145461 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:54:52.823494  145461 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:52.823809  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:52.823853  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:52.838649  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I0731 19:54:52.839040  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:52.839510  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:52.839537  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:52.839901  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:52.840125  145461 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:54:52.842863  145461 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:52.843261  145461 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:52.843287  145461 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:52.843399  145461 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:54:52.843704  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:52.843738  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:52.858047  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37843
	I0731 19:54:52.858439  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:52.858872  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:52.858898  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:52.859165  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:52.859356  145461 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:54:52.859548  145461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:52.859566  145461 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:54:52.862220  145461 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:52.862594  145461 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:54:52.862617  145461 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:54:52.862832  145461 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:54:52.863015  145461 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:54:52.863160  145461 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:54:52.863302  145461 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:54:52.948996  145461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:52.964207  145461 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:54:52.964239  145461 api_server.go:166] Checking apiserver status ...
	I0731 19:54:52.964284  145461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:54:52.977482  145461 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:54:52.987337  145461 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:54:52.987383  145461 ssh_runner.go:195] Run: ls
	I0731 19:54:52.992088  145461 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:54:52.997670  145461 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:54:52.997688  145461 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:54:52.997697  145461 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:54:52.997712  145461 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:54:52.997981  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:52.998013  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:53.012822  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34997
	I0731 19:54:53.013211  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:53.013771  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:53.013790  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:53.014122  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:53.014318  145461 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:54:53.015682  145461 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:54:53.015709  145461 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:53.016092  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:53.016139  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:53.030638  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I0731 19:54:53.031014  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:53.031454  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:53.031473  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:53.031765  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:53.031953  145461 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:54:53.034756  145461 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:53.035159  145461 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:53.035183  145461 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:53.035317  145461 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:54:53.035622  145461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:54:53.035666  145461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:54:53.050282  145461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I0731 19:54:53.050706  145461 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:54:53.051152  145461 main.go:141] libmachine: Using API Version  1
	I0731 19:54:53.051182  145461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:54:53.051476  145461 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:54:53.051612  145461 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:54:53.051774  145461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:54:53.051796  145461 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:54:53.054514  145461 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:53.054891  145461 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:54:53.054923  145461 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:54:53.055060  145461 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:54:53.055223  145461 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:54:53.055391  145461 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:54:53.055548  145461 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:54:53.136935  145461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:54:53.151035  145461 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
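On every reachable node the status runs above execute `df -h /var | awk 'NR==2{print $5}'` over SSH before checking kubelet and the apiserver; the pipeline extracts the Use% column for /var, and its failure on m02 is what triggers the "failed to get storage capacity of /var" error. The sketch below runs the same pipeline against the local /var purely to show what value the probe extracts; it is an illustration for this report, not part of the test harness.

```go
// Minimal sketch (assumption): run the same disk-usage pipeline the status
// probe executes over SSH in the logs above, here against the local /var.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		fmt.Println("df probe failed:", err)
		return
	}
	// Prints the Use% column of the second df line, e.g. "23%".
	fmt.Println("Use% of /var:", strings.TrimSpace(string(out)))
}
```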
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 7 (617.270489ms)

                                                
                                                
-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:55:03.177312  145597 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:55:03.177452  145597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:55:03.177469  145597 out.go:304] Setting ErrFile to fd 2...
	I0731 19:55:03.177477  145597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:55:03.177674  145597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:55:03.177836  145597 out.go:298] Setting JSON to false
	I0731 19:55:03.177862  145597 mustload.go:65] Loading cluster: ha-235073
	I0731 19:55:03.177898  145597 notify.go:220] Checking for updates...
	I0731 19:55:03.178208  145597 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:55:03.178222  145597 status.go:255] checking status of ha-235073 ...
	I0731 19:55:03.178642  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.178692  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.193985  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45271
	I0731 19:55:03.194414  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.194975  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.194996  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.195387  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.195598  145597 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:55:03.197392  145597 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:55:03.197419  145597 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:55:03.197724  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.197762  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.212890  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
	I0731 19:55:03.213330  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.213834  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.213854  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.214133  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.214308  145597 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:55:03.217482  145597 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:55:03.217939  145597 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:55:03.217977  145597 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:55:03.218140  145597 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:55:03.218508  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.218562  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.233073  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41805
	I0731 19:55:03.233551  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.234043  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.234067  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.234318  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.234582  145597 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:55:03.234815  145597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:55:03.234845  145597 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:55:03.237896  145597 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:55:03.238299  145597 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:55:03.238322  145597 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:55:03.238422  145597 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:55:03.238574  145597 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:55:03.238716  145597 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:55:03.238862  145597 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:55:03.317462  145597 ssh_runner.go:195] Run: systemctl --version
	I0731 19:55:03.323825  145597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:55:03.340373  145597 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:55:03.340408  145597 api_server.go:166] Checking apiserver status ...
	I0731 19:55:03.340462  145597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:55:03.354398  145597 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:55:03.363978  145597 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:55:03.364033  145597 ssh_runner.go:195] Run: ls
	I0731 19:55:03.368782  145597 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:55:03.373591  145597 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:55:03.373617  145597 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:55:03.373630  145597 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:55:03.373651  145597 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:55:03.374039  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.374082  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.389235  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45177
	I0731 19:55:03.389610  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.390059  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.390078  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.390444  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.390642  145597 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:55:03.392289  145597 status.go:330] ha-235073-m02 host status = "Stopped" (err=<nil>)
	I0731 19:55:03.392304  145597 status.go:343] host is not running, skipping remaining checks
	I0731 19:55:03.392310  145597 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:55:03.392325  145597 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:55:03.392632  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.392676  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.408167  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0731 19:55:03.408621  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.409119  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.409145  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.409509  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.409715  145597 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:55:03.411245  145597 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:55:03.411260  145597 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:55:03.411571  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.411615  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.425979  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0731 19:55:03.426401  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.426861  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.426880  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.427175  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.427363  145597 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:55:03.429912  145597 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:03.430348  145597 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:55:03.430368  145597 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:03.430484  145597 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:55:03.430774  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.430810  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.445080  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0731 19:55:03.445574  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.446069  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.446089  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.446341  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.446541  145597 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:55:03.446721  145597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:55:03.446745  145597 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:55:03.449368  145597 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:03.449748  145597 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:55:03.449771  145597 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:03.449898  145597 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:55:03.450076  145597 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:55:03.450244  145597 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:55:03.450359  145597 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:55:03.541023  145597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:55:03.556153  145597 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:55:03.556190  145597 api_server.go:166] Checking apiserver status ...
	I0731 19:55:03.556232  145597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:55:03.572584  145597 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:55:03.582282  145597 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:55:03.582332  145597 ssh_runner.go:195] Run: ls
	I0731 19:55:03.586518  145597 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:55:03.591161  145597 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:55:03.591182  145597 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:55:03.591191  145597 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:55:03.591206  145597 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:55:03.591531  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.591577  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.606924  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I0731 19:55:03.607331  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.607796  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.607820  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.608118  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.608315  145597 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:55:03.609977  145597 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:55:03.609998  145597 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:55:03.610402  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.610457  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.626865  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0731 19:55:03.627337  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.627877  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.627898  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.628251  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.628453  145597 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:55:03.631317  145597 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:03.631824  145597 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:55:03.631860  145597 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:03.632014  145597 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:55:03.632309  145597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:03.632351  145597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:03.646932  145597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0731 19:55:03.647369  145597 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:03.647784  145597 main.go:141] libmachine: Using API Version  1
	I0731 19:55:03.647801  145597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:03.648139  145597 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:03.648317  145597 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:55:03.648511  145597 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:55:03.648534  145597 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:55:03.651181  145597 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:03.651594  145597 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:55:03.651614  145597 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:03.651736  145597 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:55:03.651888  145597 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:55:03.652053  145597 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:55:03.652148  145597 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:55:03.733545  145597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:55:03.749684  145597 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
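The stderr trace above also documents the per-node health-check sequence: minikube SSHes into each running control plane, looks for the kube-apiserver process with `pgrep`, attempts (and here harmlessly fails) to read its freezer cgroup, and finally probes the load-balanced endpoint https://192.168.39.254:8443/healthz, which answers 200 with "ok". Below is a minimal, self-contained sketch of that final probe; unlike minikube's own client, which trusts the cluster CA from the kubeconfig, this version skips TLS verification for brevity, so treat it as an illustration rather than the project's implementation.

	// healthz_probe_sketch.go: probe the HA VIP's /healthz endpoint as seen
	// in the trace above. Assumption: TLS verification is skipped here; the
	// real check uses the cluster CA from the kubeconfig instead.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver returns HTTP 200 with the body "ok",
		// matching the "returned 200: ok" lines in the log.
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
	}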
E0731 19:55:09.825690  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 7 (614.790993ms)

                                                
                                                
-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-235073-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:55:12.985554  145701 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:55:12.985778  145701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:55:12.985784  145701 out.go:304] Setting ErrFile to fd 2...
	I0731 19:55:12.985788  145701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:55:12.985969  145701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:55:12.986120  145701 out.go:298] Setting JSON to false
	I0731 19:55:12.986145  145701 mustload.go:65] Loading cluster: ha-235073
	I0731 19:55:12.986233  145701 notify.go:220] Checking for updates...
	I0731 19:55:12.986506  145701 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:55:12.986521  145701 status.go:255] checking status of ha-235073 ...
	I0731 19:55:12.986866  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:12.986929  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.006102  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I0731 19:55:13.006598  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.007161  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.007183  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.007575  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.007778  145701 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:55:13.009386  145701 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 19:55:13.009416  145701 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:55:13.009718  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.009752  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.024243  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I0731 19:55:13.024716  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.025149  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.025171  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.025514  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.025659  145701 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:55:13.028180  145701 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:55:13.028616  145701 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:55:13.028650  145701 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:55:13.028779  145701 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:55:13.029063  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.029101  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.043784  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I0731 19:55:13.044328  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.044872  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.044894  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.045185  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.045369  145701 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:55:13.045566  145701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:55:13.045591  145701 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:55:13.048129  145701 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:55:13.048503  145701 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:55:13.048532  145701 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:55:13.048670  145701 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:55:13.048854  145701 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:55:13.048983  145701 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:55:13.049129  145701 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:55:13.129062  145701 ssh_runner.go:195] Run: systemctl --version
	I0731 19:55:13.135109  145701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:55:13.149908  145701 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:55:13.149938  145701 api_server.go:166] Checking apiserver status ...
	I0731 19:55:13.149981  145701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:55:13.164511  145701 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0731 19:55:13.176345  145701 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:55:13.176401  145701 ssh_runner.go:195] Run: ls
	I0731 19:55:13.180640  145701 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:55:13.185988  145701 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:55:13.186009  145701 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 19:55:13.186019  145701 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:55:13.186034  145701 status.go:255] checking status of ha-235073-m02 ...
	I0731 19:55:13.186312  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.186344  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.202067  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38509
	I0731 19:55:13.202606  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.203098  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.203123  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.203493  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.203717  145701 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:55:13.205375  145701 status.go:330] ha-235073-m02 host status = "Stopped" (err=<nil>)
	I0731 19:55:13.205392  145701 status.go:343] host is not running, skipping remaining checks
	I0731 19:55:13.205400  145701 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:55:13.205420  145701 status.go:255] checking status of ha-235073-m03 ...
	I0731 19:55:13.205706  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.205739  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.219772  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0731 19:55:13.220190  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.220634  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.220657  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.220934  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.221134  145701 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:55:13.222533  145701 status.go:330] ha-235073-m03 host status = "Running" (err=<nil>)
	I0731 19:55:13.222553  145701 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:55:13.222884  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.222941  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.238231  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36503
	I0731 19:55:13.238614  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.239052  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.239073  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.239420  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.239605  145701 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:55:13.242492  145701 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:13.242939  145701 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:55:13.242961  145701 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:13.243118  145701 host.go:66] Checking if "ha-235073-m03" exists ...
	I0731 19:55:13.243425  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.243475  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.258621  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43483
	I0731 19:55:13.259186  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.259690  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.259719  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.260006  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.260216  145701 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:55:13.260385  145701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:55:13.260401  145701 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:55:13.262980  145701 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:13.263441  145701 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:55:13.263470  145701 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:13.263613  145701 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:55:13.263777  145701 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:55:13.263945  145701 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:55:13.264162  145701 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:55:13.351222  145701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:55:13.367158  145701 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 19:55:13.367197  145701 api_server.go:166] Checking apiserver status ...
	I0731 19:55:13.367250  145701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:55:13.380278  145701 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	W0731 19:55:13.389183  145701 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:55:13.389249  145701 ssh_runner.go:195] Run: ls
	I0731 19:55:13.393760  145701 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:55:13.398189  145701 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:55:13.398216  145701 status.go:422] ha-235073-m03 apiserver status = Running (err=<nil>)
	I0731 19:55:13.398228  145701 status.go:257] ha-235073-m03 status: &{Name:ha-235073-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:55:13.398248  145701 status.go:255] checking status of ha-235073-m04 ...
	I0731 19:55:13.398537  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.398586  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.414622  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0731 19:55:13.415152  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.415699  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.415722  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.416025  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.416220  145701 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:55:13.417854  145701 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 19:55:13.417874  145701 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:55:13.418239  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.418287  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.433451  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I0731 19:55:13.433938  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.434408  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.434429  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.434841  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.435061  145701 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 19:55:13.437934  145701 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:13.438410  145701 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:55:13.438433  145701 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:13.438580  145701 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 19:55:13.438977  145701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:13.439021  145701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:13.453389  145701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36221
	I0731 19:55:13.453837  145701 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:13.454275  145701 main.go:141] libmachine: Using API Version  1
	I0731 19:55:13.454295  145701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:13.454669  145701 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:13.454870  145701 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:55:13.455083  145701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:55:13.455111  145701 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:55:13.457716  145701 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:13.458127  145701 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:55:13.458162  145701 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:13.458313  145701 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:55:13.458479  145701 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:55:13.458665  145701 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:55:13.458812  145701 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:55:13.541179  145701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:55:13.557396  145701 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr" : exit status 7
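ha_test.go:432 counts this as a failure, but note that the command itself ran to completion: the non-zero exit encodes the degraded cluster state (ha-235073-m02 is still reported Stopped on every field even though `node start m02` was issued at 19:54). A hedged sketch of how a caller can tell those two cases apart with os/exec follows; it reuses the invocation shown above but is not the actual test helper.

	// exit_status_sketch.go: distinguish "status ran and reported a degraded
	// cluster" (non-zero exit, useful output) from "status could not run".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-235073",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes fully running")
		case errors.As(err, &exitErr):
			// In the report above this branch fires with exit status 7,
			// because ha-235073-m02 has not come back up yet.
			fmt.Printf("degraded cluster, exit %d:\n%s", exitErr.ExitCode(), out)
		default:
			fmt.Println("failed to invoke minikube:", err)
		}
	}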
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-235073 -n ha-235073
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-235073 logs -n 25: (1.395326861s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073:/home/docker/cp-test_ha-235073-m03_ha-235073.txt                       |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073 sudo cat                                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073.txt                                 |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m02:/home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m02 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04:/home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m04 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp testdata/cp-test.txt                                                | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073:/home/docker/cp-test_ha-235073-m04_ha-235073.txt                       |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073 sudo cat                                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073.txt                                 |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m02:/home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m02 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03:/home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m03 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-235073 node stop m02 -v=7                                                     | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-235073 node start m02 -v=7                                                    | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:45:58
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:45:58.226009  139843 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:45:58.226125  139843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:58.226135  139843 out.go:304] Setting ErrFile to fd 2...
	I0731 19:45:58.226139  139843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:58.226314  139843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:45:58.226897  139843 out.go:298] Setting JSON to false
	I0731 19:45:58.228322  139843 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5294,"bootTime":1722449864,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:45:58.228583  139843 start.go:139] virtualization: kvm guest
	I0731 19:45:58.230861  139843 out.go:177] * [ha-235073] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:45:58.232284  139843 notify.go:220] Checking for updates...
	I0731 19:45:58.232346  139843 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:45:58.233738  139843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:45:58.235009  139843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:45:58.236378  139843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:45:58.237754  139843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:45:58.239041  139843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:45:58.240384  139843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:45:58.274375  139843 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 19:45:58.275858  139843 start.go:297] selected driver: kvm2
	I0731 19:45:58.275868  139843 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:45:58.275878  139843 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:45:58.276618  139843 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:45:58.276707  139843 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:45:58.291788  139843 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:45:58.291834  139843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:45:58.292047  139843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:45:58.292113  139843 cni.go:84] Creating CNI manager for ""
	I0731 19:45:58.292125  139843 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 19:45:58.292132  139843 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 19:45:58.292194  139843 start.go:340] cluster config:
	{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:45:58.292286  139843 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:45:58.294032  139843 out.go:177] * Starting "ha-235073" primary control-plane node in "ha-235073" cluster
	I0731 19:45:58.295338  139843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:45:58.295370  139843 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:45:58.295385  139843 cache.go:56] Caching tarball of preloaded images
	I0731 19:45:58.295472  139843 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:45:58.295483  139843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:45:58.295783  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:45:58.295802  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json: {Name:mk3eeddeb246ecc6b03da1587de41e99a8e651ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:45:58.295924  139843 start.go:360] acquireMachinesLock for ha-235073: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:45:58.295951  139843 start.go:364] duration metric: took 15.527µs to acquireMachinesLock for "ha-235073"
	I0731 19:45:58.295967  139843 start.go:93] Provisioning new machine with config: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:45:58.296020  139843 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 19:45:58.297644  139843 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 19:45:58.297774  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:45:58.297813  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:58.311988  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0731 19:45:58.312498  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:58.313061  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:45:58.313082  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:58.313469  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:58.313682  139843 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:45:58.313804  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:45:58.314007  139843 start.go:159] libmachine.API.Create for "ha-235073" (driver="kvm2")
	I0731 19:45:58.314037  139843 client.go:168] LocalClient.Create starting
	I0731 19:45:58.314073  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 19:45:58.314107  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:45:58.314123  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:45:58.314190  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 19:45:58.314207  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:45:58.314220  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:45:58.314234  139843 main.go:141] libmachine: Running pre-create checks...
	I0731 19:45:58.314244  139843 main.go:141] libmachine: (ha-235073) Calling .PreCreateCheck
	I0731 19:45:58.314573  139843 main.go:141] libmachine: (ha-235073) Calling .GetConfigRaw
	I0731 19:45:58.314934  139843 main.go:141] libmachine: Creating machine...
	I0731 19:45:58.314948  139843 main.go:141] libmachine: (ha-235073) Calling .Create
	I0731 19:45:58.315093  139843 main.go:141] libmachine: (ha-235073) Creating KVM machine...
	I0731 19:45:58.316257  139843 main.go:141] libmachine: (ha-235073) DBG | found existing default KVM network
	I0731 19:45:58.316963  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.316827  139866 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f330}
	I0731 19:45:58.317006  139843 main.go:141] libmachine: (ha-235073) DBG | created network xml: 
	I0731 19:45:58.317030  139843 main.go:141] libmachine: (ha-235073) DBG | <network>
	I0731 19:45:58.317043  139843 main.go:141] libmachine: (ha-235073) DBG |   <name>mk-ha-235073</name>
	I0731 19:45:58.317052  139843 main.go:141] libmachine: (ha-235073) DBG |   <dns enable='no'/>
	I0731 19:45:58.317063  139843 main.go:141] libmachine: (ha-235073) DBG |   
	I0731 19:45:58.317073  139843 main.go:141] libmachine: (ha-235073) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 19:45:58.317082  139843 main.go:141] libmachine: (ha-235073) DBG |     <dhcp>
	I0731 19:45:58.317093  139843 main.go:141] libmachine: (ha-235073) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 19:45:58.317120  139843 main.go:141] libmachine: (ha-235073) DBG |     </dhcp>
	I0731 19:45:58.317153  139843 main.go:141] libmachine: (ha-235073) DBG |   </ip>
	I0731 19:45:58.317165  139843 main.go:141] libmachine: (ha-235073) DBG |   
	I0731 19:45:58.317172  139843 main.go:141] libmachine: (ha-235073) DBG | </network>
	I0731 19:45:58.317179  139843 main.go:141] libmachine: (ha-235073) DBG | 
	I0731 19:45:58.321974  139843 main.go:141] libmachine: (ha-235073) DBG | trying to create private KVM network mk-ha-235073 192.168.39.0/24...
	I0731 19:45:58.386110  139843 main.go:141] libmachine: (ha-235073) DBG | private KVM network mk-ha-235073 192.168.39.0/24 created
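The lines above show the driver choosing a free private subnet and defining an isolated libvirt network mk-ha-235073 from generated XML before starting it. As a hedged sketch (not minikube's actual code), the same define-and-start flow looks roughly like this with the libvirt Go bindings; the qemu:///system URI and the XML string simply mirror what is printed above:

package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// networkXML mirrors the XML the driver logs above: a private network
// with DHCP handing out addresses from .2 to .253 and DNS disabled.
const networkXML = `<network>
  <name>mk-ha-235073</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Define the persistent network object, then bring it up.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer net.Free()

	if err := net.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	fmt.Println("private KVM network mk-ha-235073 192.168.39.0/24 created")
}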
	I0731 19:45:58.386149  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.386078  139866 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:45:58.386163  139843 main.go:141] libmachine: (ha-235073) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073 ...
	I0731 19:45:58.386182  139843 main.go:141] libmachine: (ha-235073) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:45:58.386316  139843 main.go:141] libmachine: (ha-235073) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 19:45:58.645435  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.645280  139866 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa...
	I0731 19:45:58.831858  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.831697  139866 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/ha-235073.rawdisk...
	I0731 19:45:58.831883  139843 main.go:141] libmachine: (ha-235073) DBG | Writing magic tar header
	I0731 19:45:58.831932  139843 main.go:141] libmachine: (ha-235073) DBG | Writing SSH key tar header
	I0731 19:45:58.831970  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:45:58.831844  139866 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073 ...
	I0731 19:45:58.831987  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073
	I0731 19:45:58.832044  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073 (perms=drwx------)
	I0731 19:45:58.832064  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 19:45:58.832072  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:45:58.832081  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:45:58.832092  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 19:45:58.832101  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 19:45:58.832107  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:45:58.832114  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:45:58.832121  139843 main.go:141] libmachine: (ha-235073) DBG | Checking permissions on dir: /home
	I0731 19:45:58.832130  139843 main.go:141] libmachine: (ha-235073) DBG | Skipping /home - not owner
	I0731 19:45:58.832139  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 19:45:58.832146  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:45:58.832153  139843 main.go:141] libmachine: (ha-235073) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:45:58.832160  139843 main.go:141] libmachine: (ha-235073) Creating domain...
	I0731 19:45:58.833455  139843 main.go:141] libmachine: (ha-235073) define libvirt domain using xml: 
	I0731 19:45:58.833482  139843 main.go:141] libmachine: (ha-235073) <domain type='kvm'>
	I0731 19:45:58.833492  139843 main.go:141] libmachine: (ha-235073)   <name>ha-235073</name>
	I0731 19:45:58.833503  139843 main.go:141] libmachine: (ha-235073)   <memory unit='MiB'>2200</memory>
	I0731 19:45:58.833512  139843 main.go:141] libmachine: (ha-235073)   <vcpu>2</vcpu>
	I0731 19:45:58.833519  139843 main.go:141] libmachine: (ha-235073)   <features>
	I0731 19:45:58.833527  139843 main.go:141] libmachine: (ha-235073)     <acpi/>
	I0731 19:45:58.833534  139843 main.go:141] libmachine: (ha-235073)     <apic/>
	I0731 19:45:58.833542  139843 main.go:141] libmachine: (ha-235073)     <pae/>
	I0731 19:45:58.833559  139843 main.go:141] libmachine: (ha-235073)     
	I0731 19:45:58.833567  139843 main.go:141] libmachine: (ha-235073)   </features>
	I0731 19:45:58.833576  139843 main.go:141] libmachine: (ha-235073)   <cpu mode='host-passthrough'>
	I0731 19:45:58.833594  139843 main.go:141] libmachine: (ha-235073)   
	I0731 19:45:58.833617  139843 main.go:141] libmachine: (ha-235073)   </cpu>
	I0731 19:45:58.833626  139843 main.go:141] libmachine: (ha-235073)   <os>
	I0731 19:45:58.833637  139843 main.go:141] libmachine: (ha-235073)     <type>hvm</type>
	I0731 19:45:58.833649  139843 main.go:141] libmachine: (ha-235073)     <boot dev='cdrom'/>
	I0731 19:45:58.833658  139843 main.go:141] libmachine: (ha-235073)     <boot dev='hd'/>
	I0731 19:45:58.833665  139843 main.go:141] libmachine: (ha-235073)     <bootmenu enable='no'/>
	I0731 19:45:58.833671  139843 main.go:141] libmachine: (ha-235073)   </os>
	I0731 19:45:58.833677  139843 main.go:141] libmachine: (ha-235073)   <devices>
	I0731 19:45:58.833685  139843 main.go:141] libmachine: (ha-235073)     <disk type='file' device='cdrom'>
	I0731 19:45:58.833735  139843 main.go:141] libmachine: (ha-235073)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/boot2docker.iso'/>
	I0731 19:45:58.833759  139843 main.go:141] libmachine: (ha-235073)       <target dev='hdc' bus='scsi'/>
	I0731 19:45:58.833773  139843 main.go:141] libmachine: (ha-235073)       <readonly/>
	I0731 19:45:58.833780  139843 main.go:141] libmachine: (ha-235073)     </disk>
	I0731 19:45:58.833793  139843 main.go:141] libmachine: (ha-235073)     <disk type='file' device='disk'>
	I0731 19:45:58.833805  139843 main.go:141] libmachine: (ha-235073)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:45:58.833821  139843 main.go:141] libmachine: (ha-235073)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/ha-235073.rawdisk'/>
	I0731 19:45:58.833836  139843 main.go:141] libmachine: (ha-235073)       <target dev='hda' bus='virtio'/>
	I0731 19:45:58.833850  139843 main.go:141] libmachine: (ha-235073)     </disk>
	I0731 19:45:58.833860  139843 main.go:141] libmachine: (ha-235073)     <interface type='network'>
	I0731 19:45:58.833870  139843 main.go:141] libmachine: (ha-235073)       <source network='mk-ha-235073'/>
	I0731 19:45:58.833879  139843 main.go:141] libmachine: (ha-235073)       <model type='virtio'/>
	I0731 19:45:58.833886  139843 main.go:141] libmachine: (ha-235073)     </interface>
	I0731 19:45:58.833897  139843 main.go:141] libmachine: (ha-235073)     <interface type='network'>
	I0731 19:45:58.833919  139843 main.go:141] libmachine: (ha-235073)       <source network='default'/>
	I0731 19:45:58.833938  139843 main.go:141] libmachine: (ha-235073)       <model type='virtio'/>
	I0731 19:45:58.833950  139843 main.go:141] libmachine: (ha-235073)     </interface>
	I0731 19:45:58.833961  139843 main.go:141] libmachine: (ha-235073)     <serial type='pty'>
	I0731 19:45:58.833973  139843 main.go:141] libmachine: (ha-235073)       <target port='0'/>
	I0731 19:45:58.833983  139843 main.go:141] libmachine: (ha-235073)     </serial>
	I0731 19:45:58.834010  139843 main.go:141] libmachine: (ha-235073)     <console type='pty'>
	I0731 19:45:58.834027  139843 main.go:141] libmachine: (ha-235073)       <target type='serial' port='0'/>
	I0731 19:45:58.834039  139843 main.go:141] libmachine: (ha-235073)     </console>
	I0731 19:45:58.834047  139843 main.go:141] libmachine: (ha-235073)     <rng model='virtio'>
	I0731 19:45:58.834058  139843 main.go:141] libmachine: (ha-235073)       <backend model='random'>/dev/random</backend>
	I0731 19:45:58.834065  139843 main.go:141] libmachine: (ha-235073)     </rng>
	I0731 19:45:58.834070  139843 main.go:141] libmachine: (ha-235073)     
	I0731 19:45:58.834073  139843 main.go:141] libmachine: (ha-235073)     
	I0731 19:45:58.834080  139843 main.go:141] libmachine: (ha-235073)   </devices>
	I0731 19:45:58.834084  139843 main.go:141] libmachine: (ha-235073) </domain>
	I0731 19:45:58.834098  139843 main.go:141] libmachine: (ha-235073) 
	I0731 19:45:58.838172  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:91:35:40 in network default
	I0731 19:45:58.838688  139843 main.go:141] libmachine: (ha-235073) Ensuring networks are active...
	I0731 19:45:58.838705  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:45:58.839241  139843 main.go:141] libmachine: (ha-235073) Ensuring network default is active
	I0731 19:45:58.839524  139843 main.go:141] libmachine: (ha-235073) Ensuring network mk-ha-235073 is active
	I0731 19:45:58.839948  139843 main.go:141] libmachine: (ha-235073) Getting domain xml...
	I0731 19:45:58.840528  139843 main.go:141] libmachine: (ha-235073) Creating domain...
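Once the network exists, the driver defines the VM from the domain XML above and boots it. A minimal sketch of that step, reusing the package and imports from the previous snippet; defineAndStartDomain and domainXML are illustrative names, not minikube's:

// defineAndStartDomain persists the domain definition and boots it,
// roughly the "Creating domain..." step above. domainXML is assumed to
// be the <domain type='kvm'> document the driver generated (boot ISO,
// raw disk, two virtio NICs, serial console, virtio RNG).
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return nil, fmt.Errorf("define domain: %w", err)
	}
	if err := dom.Create(); err != nil { // Create() starts a defined domain
		dom.Free()
		return nil, fmt.Errorf("start domain: %w", err)
	}
	return dom, nil
}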
	I0731 19:46:00.011490  139843 main.go:141] libmachine: (ha-235073) Waiting to get IP...
	I0731 19:46:00.012197  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:00.012529  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:00.012585  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:00.012518  139866 retry.go:31] will retry after 274.611149ms: waiting for machine to come up
	I0731 19:46:00.288981  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:00.289468  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:00.289496  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:00.289418  139866 retry.go:31] will retry after 345.869467ms: waiting for machine to come up
	I0731 19:46:00.637093  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:00.637491  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:00.637519  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:00.637440  139866 retry.go:31] will retry after 369.988704ms: waiting for machine to come up
	I0731 19:46:01.008943  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:01.009344  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:01.009377  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:01.009289  139866 retry.go:31] will retry after 444.790632ms: waiting for machine to come up
	I0731 19:46:01.455488  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:01.455918  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:01.455936  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:01.455886  139866 retry.go:31] will retry after 571.934824ms: waiting for machine to come up
	I0731 19:46:02.029661  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:02.030102  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:02.030130  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:02.030055  139866 retry.go:31] will retry after 821.5719ms: waiting for machine to come up
	I0731 19:46:02.852842  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:02.853142  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:02.853174  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:02.853085  139866 retry.go:31] will retry after 1.057355998s: waiting for machine to come up
	I0731 19:46:03.911898  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:03.912296  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:03.912324  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:03.912239  139866 retry.go:31] will retry after 1.140982402s: waiting for machine to come up
	I0731 19:46:05.054709  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:05.055046  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:05.055068  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:05.055013  139866 retry.go:31] will retry after 1.25607749s: waiting for machine to come up
	I0731 19:46:06.313657  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:06.314062  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:06.314090  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:06.314011  139866 retry.go:31] will retry after 2.299194759s: waiting for machine to come up
	I0731 19:46:08.615051  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:08.615548  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:08.615578  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:08.615494  139866 retry.go:31] will retry after 2.831140976s: waiting for machine to come up
	I0731 19:46:11.450444  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:11.450885  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:11.450914  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:11.450838  139866 retry.go:31] will retry after 2.851660254s: waiting for machine to come up
	I0731 19:46:14.304380  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:14.304871  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find current IP address of domain ha-235073 in network mk-ha-235073
	I0731 19:46:14.304894  139843 main.go:141] libmachine: (ha-235073) DBG | I0731 19:46:14.304834  139866 retry.go:31] will retry after 3.780280162s: waiting for machine to come up
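The repeated "will retry after ..." lines are a poll-with-growing-backoff loop: query the network for a DHCP lease matching the new MAC address, and sleep a little longer each time it is not there yet. A generic sketch of that pattern, assuming a hypothetical lookupIP callback (minikube's own retry helper differs in detail):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the deadline
// passes, roughly the "will retry after ..." loop in the log above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add jitter and grow the delay, mirroring the increasing
		// ~275ms, ~346ms, ... ~3.8s intervals seen in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	// Stub lookup that never finds an address, just to exercise the loop.
	_, err := waitForIP(func() (string, error) {
		return "", errors.New("unable to find current IP address")
	}, 2*time.Second)
	fmt.Println(err)
}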
	I0731 19:46:18.086353  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.086858  139843 main.go:141] libmachine: (ha-235073) Found IP for machine: 192.168.39.146
	I0731 19:46:18.086880  139843 main.go:141] libmachine: (ha-235073) Reserving static IP address...
	I0731 19:46:18.086893  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has current primary IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.087230  139843 main.go:141] libmachine: (ha-235073) DBG | unable to find host DHCP lease matching {name: "ha-235073", mac: "52:54:00:81:60:31", ip: "192.168.39.146"} in network mk-ha-235073
	I0731 19:46:18.160399  139843 main.go:141] libmachine: (ha-235073) DBG | Getting to WaitForSSH function...
	I0731 19:46:18.160436  139843 main.go:141] libmachine: (ha-235073) Reserved static IP address: 192.168.39.146
	I0731 19:46:18.160451  139843 main.go:141] libmachine: (ha-235073) Waiting for SSH to be available...
	I0731 19:46:18.162832  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.163205  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:minikube Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.163238  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.163354  139843 main.go:141] libmachine: (ha-235073) DBG | Using SSH client type: external
	I0731 19:46:18.163372  139843 main.go:141] libmachine: (ha-235073) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa (-rw-------)
	I0731 19:46:18.163405  139843 main.go:141] libmachine: (ha-235073) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.146 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:46:18.163426  139843 main.go:141] libmachine: (ha-235073) DBG | About to run SSH command:
	I0731 19:46:18.163439  139843 main.go:141] libmachine: (ha-235073) DBG | exit 0
	I0731 19:46:18.285504  139843 main.go:141] libmachine: (ha-235073) DBG | SSH cmd err, output: <nil>: 
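"Waiting for SSH" here means repeatedly running `exit 0` over SSH with the freshly generated key until the command succeeds; the exact client options are printed verbatim above. A sketch of the same probe via os/exec, with placeholder host and key paths:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest with options like those the driver
// logs above; a nil error means sshd is up and the key is accepted.
func sshReady(host, keyPath string) error {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+host,
		"exit 0")
	return cmd.Run()
}

func main() {
	// Placeholder host/key: substitute the machine's DHCP address and
	// the id_rsa generated under .minikube/machines/<name>/.
	for {
		if err := sshReady("192.168.39.146", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
}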
	I0731 19:46:18.285778  139843 main.go:141] libmachine: (ha-235073) KVM machine creation complete!
	I0731 19:46:18.286059  139843 main.go:141] libmachine: (ha-235073) Calling .GetConfigRaw
	I0731 19:46:18.286616  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:18.286855  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:18.287005  139843 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:46:18.287017  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:46:18.288490  139843 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:46:18.288504  139843 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:46:18.288517  139843 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:46:18.288525  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.290950  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.291370  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.291395  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.291544  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:18.291721  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.291919  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.292074  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:18.292231  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:18.292476  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:18.292491  139843 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:46:18.388762  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:46:18.388788  139843 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:46:18.388795  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.391599  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.391950  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.391984  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.392146  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:18.392367  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.392543  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.392699  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:18.392858  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:18.393037  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:18.393051  139843 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:46:18.490368  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:46:18.490455  139843 main.go:141] libmachine: found compatible host: buildroot
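Provisioner detection is just `cat /etc/os-release` on the guest followed by matching the ID/NAME fields (Buildroot here). A small sketch of parsing that output, assuming the raw text has already been fetched over SSH:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns /etc/os-release style `KEY=value` lines into a map,
// trimming optional quotes, so the caller can match ID or NAME.
func parseOSRelease(raw string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	raw := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	f := parseOSRelease(raw)
	if f["ID"] == "buildroot" {
		fmt.Println("found compatible host:", f["ID"])
	}
}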
	I0731 19:46:18.490463  139843 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:46:18.490470  139843 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:46:18.490790  139843 buildroot.go:166] provisioning hostname "ha-235073"
	I0731 19:46:18.490817  139843 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:46:18.490974  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.493867  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.494159  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.494186  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.494401  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:18.494609  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.494784  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.494912  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:18.495067  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:18.495288  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:18.495302  139843 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-235073 && echo "ha-235073" | sudo tee /etc/hostname
	I0731 19:46:18.607836  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073
	
	I0731 19:46:18.607873  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.610750  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.611100  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.611132  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.611270  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:18.611499  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.611662  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:18.611796  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:18.611949  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:18.612169  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:18.612196  139843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-235073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-235073/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-235073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:46:18.718538  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
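Hostname provisioning is two shell commands run over SSH: set the hostname and write /etc/hostname, then ensure /etc/hosts carries a matching 127.0.1.1 entry (replacing an existing one if present). A sketch that assembles the same commands for an arbitrary name; the shell fragments are copied from the log above, the Go wrapper is illustrative:

package main

import "fmt"

// hostnameCommands returns the two commands the provisioner runs above:
// one to set the hostname, one to keep /etc/hosts in sync with it.
func hostnameCommands(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name)
	fixHosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return setHostname, fixHosts
}

func main() {
	set, fix := hostnameCommands("ha-235073")
	fmt.Println(set)
	fmt.Println(fix)
}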
	I0731 19:46:18.718583  139843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:46:18.718606  139843 buildroot.go:174] setting up certificates
	I0731 19:46:18.718617  139843 provision.go:84] configureAuth start
	I0731 19:46:18.718626  139843 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:46:18.718956  139843 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:46:18.721716  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.722078  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.722116  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.722332  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:18.724853  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.725181  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:18.725211  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:18.725389  139843 provision.go:143] copyHostCerts
	I0731 19:46:18.725426  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:46:18.725469  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 19:46:18.725486  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:46:18.725567  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:46:18.725676  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:46:18.725701  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 19:46:18.725709  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:46:18.725748  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:46:18.725816  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:46:18.725840  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 19:46:18.725845  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:46:18.725879  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:46:18.725959  139843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.ha-235073 san=[127.0.0.1 192.168.39.146 ha-235073 localhost minikube]
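The "generating server cert" step signs a host certificate with the local minikube CA, listing the VM's IP, hostname, localhost and minikube as subject alternative names. A compact, self-contained sketch of that pattern with crypto/x509; it creates a throwaway CA in memory rather than loading ca.pem/ca-key.pem from disk, so the names and lifetimes are assumptions for illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-235073"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-235073", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.146")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("server cert signed by CA, %d bytes DER\n", len(srvDER))
}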
	I0731 19:46:19.018788  139843 provision.go:177] copyRemoteCerts
	I0731 19:46:19.018860  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:46:19.018891  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.021580  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.021860  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.021904  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.022018  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.022223  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.022424  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.022580  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:19.104173  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:46:19.104258  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:46:19.128347  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:46:19.128446  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 19:46:19.152561  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:46:19.152653  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:46:19.177423  139843 provision.go:87] duration metric: took 458.789911ms to configureAuth
	I0731 19:46:19.177460  139843 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:46:19.177644  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:46:19.177731  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.180417  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.180701  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.180723  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.180884  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.181101  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.181268  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.181413  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.181581  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:19.181749  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:19.181764  139843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:46:19.444918  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:46:19.444960  139843 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:46:19.444973  139843 main.go:141] libmachine: (ha-235073) Calling .GetURL
	I0731 19:46:19.446199  139843 main.go:141] libmachine: (ha-235073) DBG | Using libvirt version 6000000
	I0731 19:46:19.447983  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.448327  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.448359  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.448413  139843 main.go:141] libmachine: Docker is up and running!
	I0731 19:46:19.448424  139843 main.go:141] libmachine: Reticulating splines...
	I0731 19:46:19.448433  139843 client.go:171] duration metric: took 21.134389884s to LocalClient.Create
	I0731 19:46:19.448472  139843 start.go:167] duration metric: took 21.134465555s to libmachine.API.Create "ha-235073"
	I0731 19:46:19.448484  139843 start.go:293] postStartSetup for "ha-235073" (driver="kvm2")
	I0731 19:46:19.448496  139843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:46:19.448521  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.448782  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:46:19.448805  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.450554  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.450860  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.450903  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.451018  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.451211  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.451379  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.451532  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:19.532377  139843 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:46:19.536707  139843 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:46:19.536732  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:46:19.536788  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:46:19.536857  139843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 19:46:19.536868  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 19:46:19.536958  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:46:19.547066  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:46:19.570870  139843 start.go:296] duration metric: took 122.370251ms for postStartSetup
	I0731 19:46:19.570953  139843 main.go:141] libmachine: (ha-235073) Calling .GetConfigRaw
	I0731 19:46:19.571664  139843 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:46:19.574060  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.574413  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.574440  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.574599  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:46:19.574777  139843 start.go:128] duration metric: took 21.278745189s to createHost
	I0731 19:46:19.574799  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.576744  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.577001  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.577036  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.577205  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.577405  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.577604  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.577743  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.577922  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:46:19.578083  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:46:19.578095  139843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:46:19.674714  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722455179.640290588
	
	I0731 19:46:19.674754  139843 fix.go:216] guest clock: 1722455179.640290588
	I0731 19:46:19.674762  139843 fix.go:229] Guest: 2024-07-31 19:46:19.640290588 +0000 UTC Remote: 2024-07-31 19:46:19.57478807 +0000 UTC m=+21.383718664 (delta=65.502518ms)
	I0731 19:46:19.674795  139843 fix.go:200] guest clock delta is within tolerance: 65.502518ms
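The fix.go lines compare the guest clock (read with `date +%s.%N` over SSH) against the host clock and only adjust the guest when the absolute delta exceeds a tolerance; the 65.5ms measured here is within it. A tiny sketch of that check; the 2s tolerance below is an assumed value, not necessarily minikube's:

package main

import (
	"fmt"
	"time"
)

// clockDriftOK reports whether the guest/host clock delta is small enough
// to skip adjusting the guest clock, as in the fix.go lines above.
func clockDriftOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Unix(0, 1722455179574788070)  // Remote: 2024-07-31 19:46:19.57478807 UTC
	guest := time.Unix(0, 1722455179640290588) // Guest: value read via `date +%s.%N`
	delta, ok := clockDriftOK(guest, host, 2*time.Second) // tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}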
	I0731 19:46:19.674804  139843 start.go:83] releasing machines lock for "ha-235073", held for 21.378844327s
	I0731 19:46:19.674825  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.675108  139843 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:46:19.678117  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.678495  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.678523  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.678671  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.679217  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.679385  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:19.679467  139843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:46:19.679497  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.679625  139843 ssh_runner.go:195] Run: cat /version.json
	I0731 19:46:19.679650  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:19.682124  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.682320  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.682653  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.682689  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.682719  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:19.682719  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.682746  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:19.682820  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:19.682906  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.683023  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:19.683114  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.683177  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:19.683289  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:19.683332  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:19.778962  139843 ssh_runner.go:195] Run: systemctl --version
	I0731 19:46:19.785027  139843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:46:19.947529  139843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:46:19.954223  139843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:46:19.954310  139843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:46:19.970253  139843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:46:19.970279  139843 start.go:495] detecting cgroup driver to use...
	I0731 19:46:19.970421  139843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:46:19.986810  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:46:20.000725  139843 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:46:20.000788  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:46:20.014432  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:46:20.027667  139843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:46:20.144356  139843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:46:20.280021  139843 docker.go:233] disabling docker service ...
	I0731 19:46:20.280088  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:46:20.295165  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:46:20.309130  139843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:46:20.437305  139843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:46:20.547819  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:46:20.562190  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:46:20.580796  139843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:46:20.580861  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.591809  139843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:46:20.591872  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.602731  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.613312  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.623950  139843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:46:20.634837  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.645253  139843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.662809  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:46:20.673628  139843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:46:20.683315  139843 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:46:20.683381  139843 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:46:20.697196  139843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
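The two commands above cover the kernel prerequisites for bridged pod networking; a small check, using the same sysctl keys, that they took effect:
	# Verify the bridge-netfilter module and IPv4 forwarding set up above.
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# Making them persistent (not done here) would look like:
	#   echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
	#   echo net.ipv4.ip_forward=1 | sudo tee /etc/sysctl.d/99-k8s.conf && sudo sysctl --system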
	I0731 19:46:20.707078  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:46:20.818722  139843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:46:20.961445  139843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:46:20.961519  139843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:46:20.966586  139843 start.go:563] Will wait 60s for crictl version
	I0731 19:46:20.966678  139843 ssh_runner.go:195] Run: which crictl
	I0731 19:46:20.970382  139843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:46:21.006302  139843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:46:21.006389  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:46:21.034094  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:46:21.062380  139843 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:46:21.063665  139843 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:46:21.066178  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:21.066535  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:21.066565  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:21.066791  139843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:46:21.070835  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:46:21.083165  139843 kubeadm.go:883] updating cluster {Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:46:21.083268  139843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:46:21.083308  139843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:46:21.112818  139843 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 19:46:21.112908  139843 ssh_runner.go:195] Run: which lz4
	I0731 19:46:21.116693  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 19:46:21.116773  139843 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 19:46:21.120894  139843 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 19:46:21.120925  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 19:46:22.492408  139843 crio.go:462] duration metric: took 1.375652525s to copy over tarball
	I0731 19:46:22.492495  139843 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 19:46:24.583933  139843 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.091398881s)
	I0731 19:46:24.583967  139843 crio.go:469] duration metric: took 2.091524869s to extract the tarball
	I0731 19:46:24.583975  139843 ssh_runner.go:146] rm: /preloaded.tar.lz4
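A rough sketch of the check-then-copy pattern behind the preload handling above (guest path and tar flags as in this run; the actual file transfer goes over minikube's SSH session):
	# Only ship the ~406 MB tarball when the guest does not already have it.
	if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
	    echo "preload missing on the guest; copy it from the host cache" >&2
	fi
	# Unpacking over /var populates the CRI-O image store before kubeadm runs.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json | grep -q kube-apiserver && echo "images preloaded"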
	I0731 19:46:24.621603  139843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:46:24.669376  139843 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:46:24.669400  139843 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:46:24.669410  139843 kubeadm.go:934] updating node { 192.168.39.146 8443 v1.30.3 crio true true} ...
	I0731 19:46:24.669542  139843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-235073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:46:24.669635  139843 ssh_runner.go:195] Run: crio config
	I0731 19:46:24.713852  139843 cni.go:84] Creating CNI manager for ""
	I0731 19:46:24.713876  139843 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 19:46:24.713889  139843 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:46:24.713920  139843 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-235073 NodeName:ha-235073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:46:24.714093  139843 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-235073"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
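A quick, non-destructive way to sanity-check a rendered config like the one above (a sketch, not what the test itself does; binary and file paths as written by this run):
	# Parse and dry-run the rendered config without touching the node state.
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run >/dev/null \
	  && echo "kubeadm config parses cleanly"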
	
	I0731 19:46:24.714121  139843 kube-vip.go:115] generating kube-vip config ...
	I0731 19:46:24.714174  139843 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 19:46:24.731610  139843 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 19:46:24.731721  139843 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
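The static pod above announces 192.168.39.254 on port 8443 as the control-plane VIP; a minimal check of that endpoint once kubelet has started the pod (address and port taken from the manifest):
	# The VIP should answer on the apiserver port once kube-vip wins leader election.
	ping -c 1 192.168.39.254
	curl -k https://192.168.39.254:8443/healthz; echo
	# kube-vip runs as a static pod, so its container is visible to crictl on the node:
	sudo crictl ps --name kube-vip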
	I0731 19:46:24.731791  139843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:46:24.741137  139843 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:46:24.741192  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 19:46:24.750015  139843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 19:46:24.765598  139843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:46:24.781317  139843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 19:46:24.797245  139843 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 19:46:24.813104  139843 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 19:46:24.816768  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
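This is the same rewrite-and-copy pattern used earlier for host.minikube.internal; confirming the alias from outside the guest is a one-liner (profile name from this run, minikube assumed to be on PATH):
	minikube -p ha-235073 ssh -- grep control-plane.minikube.internal /etc/hosts
	# expected for this run: 192.168.39.254	control-plane.minikube.internal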
	I0731 19:46:24.828262  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:46:24.941199  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:46:24.957225  139843 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073 for IP: 192.168.39.146
	I0731 19:46:24.957251  139843 certs.go:194] generating shared ca certs ...
	I0731 19:46:24.957273  139843 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:24.957485  139843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:46:24.957554  139843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:46:24.957570  139843 certs.go:256] generating profile certs ...
	I0731 19:46:24.957666  139843 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key
	I0731 19:46:24.957686  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt with IP's: []
	I0731 19:46:25.138659  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt ...
	I0731 19:46:25.138691  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt: {Name:mk8eeb47ca9173eddfd8196b7e593e298c83e50a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.138881  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key ...
	I0731 19:46:25.138896  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key: {Name:mkfa9697e2ebe61beb186a68c7c9645a0af9abc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.139002  139843 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.82950da9
	I0731 19:46:25.139021  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.82950da9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.146 192.168.39.254]
	I0731 19:46:25.217551  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.82950da9 ...
	I0731 19:46:25.217584  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.82950da9: {Name:mk11b1d3f3ac82a08a7990ea92b49f5707becbd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.217758  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.82950da9 ...
	I0731 19:46:25.217777  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.82950da9: {Name:mka383161a89784e9944aae91199cdf6fda371f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.217874  139843 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.82950da9 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt
	I0731 19:46:25.217988  139843 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.82950da9 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key
	I0731 19:46:25.218084  139843 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key
	I0731 19:46:25.218104  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt with IP's: []
	I0731 19:46:25.489830  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt ...
	I0731 19:46:25.489864  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt: {Name:mkcd956d75512ed26c96feee86155abe04d06817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.490048  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key ...
	I0731 19:46:25.490062  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key: {Name:mk54989aa19e6971e17508247521aa4df1689b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:25.490161  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:46:25.490183  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:46:25.490200  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:46:25.490217  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:46:25.490236  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:46:25.490255  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:46:25.490273  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:46:25.490293  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:46:25.490351  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 19:46:25.490401  139843 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 19:46:25.490414  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:46:25.490447  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:46:25.490510  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:46:25.490555  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:46:25.490614  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:46:25.490655  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:46:25.490675  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 19:46:25.490696  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 19:46:25.491277  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:46:25.516812  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:46:25.539578  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:46:25.561873  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:46:25.584064  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 19:46:25.606376  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 19:46:25.628567  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:46:25.651078  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:46:25.673604  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:46:25.696165  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 19:46:25.720811  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 19:46:25.744829  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:46:25.778933  139843 ssh_runner.go:195] Run: openssl version
	I0731 19:46:25.786952  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 19:46:25.799457  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 19:46:25.804084  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 19:46:25.804144  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 19:46:25.809832  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 19:46:25.820347  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 19:46:25.830602  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 19:46:25.834777  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 19:46:25.834809  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 19:46:25.840082  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:46:25.850491  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:46:25.860978  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:46:25.865620  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:46:25.865679  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:46:25.871289  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
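The test/ln sequence above implements OpenSSL's subject-hash lookup: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA in this run). A sketch of the mechanics:
	# The printed hash is what names the symlink (b5213941.0 for minikubeCA here).
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# Tools using the default OpenSSL verify paths can then find the CA by hash:
	openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt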
	I0731 19:46:25.881977  139843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:46:25.886017  139843 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:46:25.886065  139843 kubeadm.go:392] StartCluster: {Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:46:25.886155  139843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:46:25.886201  139843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:46:25.925417  139843 cri.go:89] found id: ""
	I0731 19:46:25.925490  139843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 19:46:25.936024  139843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:46:25.950779  139843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:46:25.962133  139843 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 19:46:25.962150  139843 kubeadm.go:157] found existing configuration files:
	
	I0731 19:46:25.962207  139843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:46:25.972276  139843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 19:46:25.972336  139843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 19:46:25.982555  139843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:46:25.992306  139843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 19:46:25.992399  139843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 19:46:26.002240  139843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:46:26.011658  139843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 19:46:26.011721  139843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:46:26.021447  139843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:46:26.030629  139843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 19:46:26.030685  139843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:46:26.040128  139843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 19:46:26.295845  139843 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:46:38.276896  139843 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 19:46:38.276953  139843 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 19:46:38.277036  139843 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 19:46:38.277141  139843 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 19:46:38.277283  139843 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 19:46:38.277369  139843 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 19:46:38.279111  139843 out.go:204]   - Generating certificates and keys ...
	I0731 19:46:38.279204  139843 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 19:46:38.279296  139843 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 19:46:38.279407  139843 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 19:46:38.279472  139843 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 19:46:38.279528  139843 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 19:46:38.279589  139843 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 19:46:38.279637  139843 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 19:46:38.279758  139843 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-235073 localhost] and IPs [192.168.39.146 127.0.0.1 ::1]
	I0731 19:46:38.279811  139843 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 19:46:38.279981  139843 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-235073 localhost] and IPs [192.168.39.146 127.0.0.1 ::1]
	I0731 19:46:38.280066  139843 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 19:46:38.280154  139843 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 19:46:38.280211  139843 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 19:46:38.280290  139843 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 19:46:38.280364  139843 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 19:46:38.280430  139843 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 19:46:38.280502  139843 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 19:46:38.280584  139843 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 19:46:38.280657  139843 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 19:46:38.280770  139843 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 19:46:38.280857  139843 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 19:46:38.282428  139843 out.go:204]   - Booting up control plane ...
	I0731 19:46:38.282505  139843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 19:46:38.282600  139843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 19:46:38.282691  139843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 19:46:38.282817  139843 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 19:46:38.282941  139843 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 19:46:38.282994  139843 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 19:46:38.283119  139843 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 19:46:38.283182  139843 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 19:46:38.283237  139843 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.374653ms
	I0731 19:46:38.283296  139843 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 19:46:38.283361  139843 kubeadm.go:310] [api-check] The API server is healthy after 6.112498134s
	I0731 19:46:38.283496  139843 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 19:46:38.283660  139843 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 19:46:38.283716  139843 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 19:46:38.283860  139843 kubeadm.go:310] [mark-control-plane] Marking the node ha-235073 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 19:46:38.283913  139843 kubeadm.go:310] [bootstrap-token] Using token: 6dy6ds.nufllor3coa5iqmk
	I0731 19:46:38.285367  139843 out.go:204]   - Configuring RBAC rules ...
	I0731 19:46:38.285458  139843 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 19:46:38.285541  139843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 19:46:38.285660  139843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 19:46:38.285760  139843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 19:46:38.285849  139843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 19:46:38.285923  139843 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 19:46:38.286037  139843 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 19:46:38.286099  139843 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 19:46:38.286138  139843 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 19:46:38.286144  139843 kubeadm.go:310] 
	I0731 19:46:38.286214  139843 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 19:46:38.286229  139843 kubeadm.go:310] 
	I0731 19:46:38.286291  139843 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 19:46:38.286297  139843 kubeadm.go:310] 
	I0731 19:46:38.286336  139843 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 19:46:38.286394  139843 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 19:46:38.286477  139843 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 19:46:38.286489  139843 kubeadm.go:310] 
	I0731 19:46:38.286540  139843 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 19:46:38.286550  139843 kubeadm.go:310] 
	I0731 19:46:38.286588  139843 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 19:46:38.286594  139843 kubeadm.go:310] 
	I0731 19:46:38.286647  139843 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 19:46:38.286732  139843 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 19:46:38.286794  139843 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 19:46:38.286799  139843 kubeadm.go:310] 
	I0731 19:46:38.286883  139843 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 19:46:38.286962  139843 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 19:46:38.286971  139843 kubeadm.go:310] 
	I0731 19:46:38.287067  139843 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6dy6ds.nufllor3coa5iqmk \
	I0731 19:46:38.287207  139843 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 19:46:38.287229  139843 kubeadm.go:310] 	--control-plane 
	I0731 19:46:38.287233  139843 kubeadm.go:310] 
	I0731 19:46:38.287303  139843 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 19:46:38.287309  139843 kubeadm.go:310] 
	I0731 19:46:38.287378  139843 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6dy6ds.nufllor3coa5iqmk \
	I0731 19:46:38.287471  139843 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
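The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key; if the join command is lost it can be recomputed on the control plane with the standard kubeadm recipe (cert and binary paths as on this node):
	# Recompute the CA public-key hash that joining nodes pin against.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# Or mint a fresh bootstrap token together with the full join command:
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm token create --print-join-command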
	I0731 19:46:38.287486  139843 cni.go:84] Creating CNI manager for ""
	I0731 19:46:38.287494  139843 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 19:46:38.289660  139843 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 19:46:38.290982  139843 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 19:46:38.296573  139843 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 19:46:38.296589  139843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 19:46:38.314588  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
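The manifest applied above deploys kindnet as the CNI; a small follow-up check, with assumed object names, that the DaemonSet rolls out before pods are scheduled:
	kubectl --context ha-235073 -n kube-system rollout status ds/kindnet --timeout=120s
	kubectl --context ha-235073 -n kube-system get pods -o wide | grep -i kindnet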
	I0731 19:46:38.674906  139843 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 19:46:38.675011  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:38.675011  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-235073 minikube.k8s.io/updated_at=2024_07_31T19_46_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=ha-235073 minikube.k8s.io/primary=true
	I0731 19:46:38.689669  139843 ops.go:34] apiserver oom_adj: -16
	I0731 19:46:38.806046  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:39.306767  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:39.806688  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:40.306926  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:40.807017  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:41.306141  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:41.807048  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:42.306188  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:42.806087  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:43.306530  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:43.807036  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:44.306293  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:44.806543  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:45.306436  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:45.806665  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:46.306320  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:46.806299  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:47.306983  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:47.806158  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:48.306491  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:48.806404  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:49.306555  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:49.806376  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:50.306188  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:50.806567  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 19:46:50.920724  139843 kubeadm.go:1113] duration metric: took 12.245789791s to wait for elevateKubeSystemPrivileges
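The repeated "get sa default" calls above are a readiness poll: the RBAC binding and node labels can only be applied once the controller-manager has created the default service account. The equivalent loop in plain shell (binary and kubeconfig paths as in this run):
	# Poll until the default service account exists, then proceed with one-time setup.
	K=/var/lib/minikube/binaries/v1.30.3/kubectl
	until sudo "$K" --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	    sleep 0.5
	done
	echo "default service account present"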
	I0731 19:46:50.920770  139843 kubeadm.go:394] duration metric: took 25.034709102s to StartCluster
	I0731 19:46:50.920795  139843 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:50.920881  139843 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:46:50.922029  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:46:50.922387  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 19:46:50.922406  139843 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:46:50.922438  139843 start.go:241] waiting for startup goroutines ...
	I0731 19:46:50.922468  139843 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 19:46:50.922552  139843 addons.go:69] Setting storage-provisioner=true in profile "ha-235073"
	I0731 19:46:50.922565  139843 addons.go:69] Setting default-storageclass=true in profile "ha-235073"
	I0731 19:46:50.922659  139843 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-235073"
	I0731 19:46:50.922669  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:46:50.922585  139843 addons.go:234] Setting addon storage-provisioner=true in "ha-235073"
	I0731 19:46:50.922729  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:46:50.923203  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.923210  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.923266  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.923365  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.938471  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I0731 19:46:50.938471  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0731 19:46:50.939025  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.939054  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.939700  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.939718  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.939774  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.939793  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.940072  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.940153  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.940273  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:46:50.940878  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.940926  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.942744  139843 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:46:50.943032  139843 kapi.go:59] client config for ha-235073: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key", CAFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 19:46:50.943550  139843 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 19:46:50.943726  139843 addons.go:234] Setting addon default-storageclass=true in "ha-235073"
	I0731 19:46:50.943759  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:46:50.944078  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.944123  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.955948  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0731 19:46:50.956382  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.956863  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.956888  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.957205  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.957399  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:46:50.959014  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:50.961123  139843 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:46:50.962572  139843 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:46:50.962594  139843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 19:46:50.962614  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:50.963249  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41241
	I0731 19:46:50.963768  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.964334  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.964357  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.964712  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.965249  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:50.965320  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:50.965867  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:50.966298  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:50.966325  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:50.966605  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:50.966787  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:50.966928  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:50.967069  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:50.980464  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0731 19:46:50.980863  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:50.981352  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:50.981377  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:50.981670  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:50.981847  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:46:50.983303  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:46:50.983488  139843 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 19:46:50.983503  139843 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 19:46:50.983517  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:46:50.986171  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:50.986605  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:46:50.986631  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:46:50.986786  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:46:50.986944  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:46:50.987093  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:46:50.987248  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:46:51.033166  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 19:46:51.128226  139843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:46:51.147741  139843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 19:46:51.516879  139843 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
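
The command above rewrites CoreDNS's Corefile over SSH with sed so that host.minikube.internal resolves to 192.168.39.1. A hedged sketch of the equivalent edit done with client-go instead of kubectl-over-SSH; injectHostRecord is illustrative and only loosely mirrors the sed expression in the log:

package sketch

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord splices a hosts{} block ahead of the forward plugin in the coredns ConfigMap.
func injectHostRecord(kubeconfig, hostIP string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
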
	I0731 19:46:51.794650  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.794689  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.794664  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.794748  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.795071  139843 main.go:141] libmachine: (ha-235073) DBG | Closing plugin on server side
	I0731 19:46:51.795079  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.795097  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.795106  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.795113  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.795120  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.795134  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.795071  139843 main.go:141] libmachine: (ha-235073) DBG | Closing plugin on server side
	I0731 19:46:51.795147  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.795254  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.795327  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.795338  139843 main.go:141] libmachine: (ha-235073) DBG | Closing plugin on server side
	I0731 19:46:51.795342  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.795525  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.795538  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.795557  139843 main.go:141] libmachine: (ha-235073) DBG | Closing plugin on server side
	I0731 19:46:51.795721  139843 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 19:46:51.795735  139843 round_trippers.go:469] Request Headers:
	I0731 19:46:51.795746  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:46:51.795753  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:46:51.815665  139843 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0731 19:46:51.817344  139843 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 19:46:51.817362  139843 round_trippers.go:469] Request Headers:
	I0731 19:46:51.817374  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:46:51.817379  139843 round_trippers.go:473]     Content-Type: application/json
	I0731 19:46:51.817384  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:46:51.821962  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:46:51.822144  139843 main.go:141] libmachine: Making call to close driver server
	I0731 19:46:51.822162  139843 main.go:141] libmachine: (ha-235073) Calling .Close
	I0731 19:46:51.822438  139843 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:46:51.822455  139843 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:46:51.824297  139843 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 19:46:51.825435  139843 addons.go:510] duration metric: took 902.97956ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 19:46:51.825470  139843 start.go:246] waiting for cluster config update ...
	I0731 19:46:51.825485  139843 start.go:255] writing updated cluster config ...
	I0731 19:46:51.827099  139843 out.go:177] 
	I0731 19:46:51.828566  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:46:51.828645  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:46:51.830552  139843 out.go:177] * Starting "ha-235073-m02" control-plane node in "ha-235073" cluster
	I0731 19:46:51.831795  139843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:46:51.831816  139843 cache.go:56] Caching tarball of preloaded images
	I0731 19:46:51.831925  139843 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:46:51.831939  139843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:46:51.831999  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:46:51.832161  139843 start.go:360] acquireMachinesLock for ha-235073-m02: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:46:51.832202  139843 start.go:364] duration metric: took 23.256µs to acquireMachinesLock for "ha-235073-m02"
	I0731 19:46:51.832218  139843 start.go:93] Provisioning new machine with config: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:46:51.832287  139843 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 19:46:51.833957  139843 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 19:46:51.834035  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:46:51.834067  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:46:51.848458  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I0731 19:46:51.848928  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:46:51.849449  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:46:51.849472  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:46:51.849765  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:46:51.849939  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetMachineName
	I0731 19:46:51.850082  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:46:51.850273  139843 start.go:159] libmachine.API.Create for "ha-235073" (driver="kvm2")
	I0731 19:46:51.850301  139843 client.go:168] LocalClient.Create starting
	I0731 19:46:51.850334  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 19:46:51.850371  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:46:51.850388  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:46:51.850462  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 19:46:51.850486  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:46:51.850502  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:46:51.850525  139843 main.go:141] libmachine: Running pre-create checks...
	I0731 19:46:51.850536  139843 main.go:141] libmachine: (ha-235073-m02) Calling .PreCreateCheck
	I0731 19:46:51.850681  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetConfigRaw
	I0731 19:46:51.851052  139843 main.go:141] libmachine: Creating machine...
	I0731 19:46:51.851064  139843 main.go:141] libmachine: (ha-235073-m02) Calling .Create
	I0731 19:46:51.851175  139843 main.go:141] libmachine: (ha-235073-m02) Creating KVM machine...
	I0731 19:46:51.852291  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found existing default KVM network
	I0731 19:46:51.852447  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found existing private KVM network mk-ha-235073
	I0731 19:46:51.852595  139843 main.go:141] libmachine: (ha-235073-m02) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02 ...
	I0731 19:46:51.852617  139843 main.go:141] libmachine: (ha-235073-m02) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:46:51.852718  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:51.852582  140223 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:46:51.852806  139843 main.go:141] libmachine: (ha-235073-m02) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 19:46:52.129760  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:52.129625  140223 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa...
	I0731 19:46:52.220476  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:52.220360  140223 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/ha-235073-m02.rawdisk...
	I0731 19:46:52.220506  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Writing magic tar header
	I0731 19:46:52.220518  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Writing SSH key tar header
	I0731 19:46:52.220533  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:52.220501  140223 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02 ...
	I0731 19:46:52.220673  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02
	I0731 19:46:52.220711  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 19:46:52.220727  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02 (perms=drwx------)
	I0731 19:46:52.220747  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:46:52.220759  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 19:46:52.220771  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 19:46:52.220784  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:46:52.220794  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:46:52.220810  139843 main.go:141] libmachine: (ha-235073-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:46:52.220821  139843 main.go:141] libmachine: (ha-235073-m02) Creating domain...
	I0731 19:46:52.220833  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 19:46:52.220844  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:46:52.220853  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:46:52.220864  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Checking permissions on dir: /home
	I0731 19:46:52.220874  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Skipping /home - not owner
	I0731 19:46:52.221756  139843 main.go:141] libmachine: (ha-235073-m02) define libvirt domain using xml: 
	I0731 19:46:52.221778  139843 main.go:141] libmachine: (ha-235073-m02) <domain type='kvm'>
	I0731 19:46:52.221789  139843 main.go:141] libmachine: (ha-235073-m02)   <name>ha-235073-m02</name>
	I0731 19:46:52.221796  139843 main.go:141] libmachine: (ha-235073-m02)   <memory unit='MiB'>2200</memory>
	I0731 19:46:52.221807  139843 main.go:141] libmachine: (ha-235073-m02)   <vcpu>2</vcpu>
	I0731 19:46:52.221813  139843 main.go:141] libmachine: (ha-235073-m02)   <features>
	I0731 19:46:52.221823  139843 main.go:141] libmachine: (ha-235073-m02)     <acpi/>
	I0731 19:46:52.221829  139843 main.go:141] libmachine: (ha-235073-m02)     <apic/>
	I0731 19:46:52.221840  139843 main.go:141] libmachine: (ha-235073-m02)     <pae/>
	I0731 19:46:52.221850  139843 main.go:141] libmachine: (ha-235073-m02)     
	I0731 19:46:52.221870  139843 main.go:141] libmachine: (ha-235073-m02)   </features>
	I0731 19:46:52.221887  139843 main.go:141] libmachine: (ha-235073-m02)   <cpu mode='host-passthrough'>
	I0731 19:46:52.221896  139843 main.go:141] libmachine: (ha-235073-m02)   
	I0731 19:46:52.221901  139843 main.go:141] libmachine: (ha-235073-m02)   </cpu>
	I0731 19:46:52.221912  139843 main.go:141] libmachine: (ha-235073-m02)   <os>
	I0731 19:46:52.221922  139843 main.go:141] libmachine: (ha-235073-m02)     <type>hvm</type>
	I0731 19:46:52.221931  139843 main.go:141] libmachine: (ha-235073-m02)     <boot dev='cdrom'/>
	I0731 19:46:52.221940  139843 main.go:141] libmachine: (ha-235073-m02)     <boot dev='hd'/>
	I0731 19:46:52.221952  139843 main.go:141] libmachine: (ha-235073-m02)     <bootmenu enable='no'/>
	I0731 19:46:52.221968  139843 main.go:141] libmachine: (ha-235073-m02)   </os>
	I0731 19:46:52.222014  139843 main.go:141] libmachine: (ha-235073-m02)   <devices>
	I0731 19:46:52.222038  139843 main.go:141] libmachine: (ha-235073-m02)     <disk type='file' device='cdrom'>
	I0731 19:46:52.222051  139843 main.go:141] libmachine: (ha-235073-m02)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/boot2docker.iso'/>
	I0731 19:46:52.222061  139843 main.go:141] libmachine: (ha-235073-m02)       <target dev='hdc' bus='scsi'/>
	I0731 19:46:52.222071  139843 main.go:141] libmachine: (ha-235073-m02)       <readonly/>
	I0731 19:46:52.222082  139843 main.go:141] libmachine: (ha-235073-m02)     </disk>
	I0731 19:46:52.222094  139843 main.go:141] libmachine: (ha-235073-m02)     <disk type='file' device='disk'>
	I0731 19:46:52.222106  139843 main.go:141] libmachine: (ha-235073-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:46:52.222142  139843 main.go:141] libmachine: (ha-235073-m02)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/ha-235073-m02.rawdisk'/>
	I0731 19:46:52.222167  139843 main.go:141] libmachine: (ha-235073-m02)       <target dev='hda' bus='virtio'/>
	I0731 19:46:52.222179  139843 main.go:141] libmachine: (ha-235073-m02)     </disk>
	I0731 19:46:52.222192  139843 main.go:141] libmachine: (ha-235073-m02)     <interface type='network'>
	I0731 19:46:52.222206  139843 main.go:141] libmachine: (ha-235073-m02)       <source network='mk-ha-235073'/>
	I0731 19:46:52.222218  139843 main.go:141] libmachine: (ha-235073-m02)       <model type='virtio'/>
	I0731 19:46:52.222230  139843 main.go:141] libmachine: (ha-235073-m02)     </interface>
	I0731 19:46:52.222241  139843 main.go:141] libmachine: (ha-235073-m02)     <interface type='network'>
	I0731 19:46:52.222252  139843 main.go:141] libmachine: (ha-235073-m02)       <source network='default'/>
	I0731 19:46:52.222263  139843 main.go:141] libmachine: (ha-235073-m02)       <model type='virtio'/>
	I0731 19:46:52.222282  139843 main.go:141] libmachine: (ha-235073-m02)     </interface>
	I0731 19:46:52.222301  139843 main.go:141] libmachine: (ha-235073-m02)     <serial type='pty'>
	I0731 19:46:52.222314  139843 main.go:141] libmachine: (ha-235073-m02)       <target port='0'/>
	I0731 19:46:52.222324  139843 main.go:141] libmachine: (ha-235073-m02)     </serial>
	I0731 19:46:52.222336  139843 main.go:141] libmachine: (ha-235073-m02)     <console type='pty'>
	I0731 19:46:52.222348  139843 main.go:141] libmachine: (ha-235073-m02)       <target type='serial' port='0'/>
	I0731 19:46:52.222376  139843 main.go:141] libmachine: (ha-235073-m02)     </console>
	I0731 19:46:52.222394  139843 main.go:141] libmachine: (ha-235073-m02)     <rng model='virtio'>
	I0731 19:46:52.222411  139843 main.go:141] libmachine: (ha-235073-m02)       <backend model='random'>/dev/random</backend>
	I0731 19:46:52.222431  139843 main.go:141] libmachine: (ha-235073-m02)     </rng>
	I0731 19:46:52.222444  139843 main.go:141] libmachine: (ha-235073-m02)     
	I0731 19:46:52.222454  139843 main.go:141] libmachine: (ha-235073-m02)     
	I0731 19:46:52.222466  139843 main.go:141] libmachine: (ha-235073-m02)   </devices>
	I0731 19:46:52.222477  139843 main.go:141] libmachine: (ha-235073-m02) </domain>
	I0731 19:46:52.222488  139843 main.go:141] libmachine: (ha-235073-m02) 
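
The XML logged above is the full libvirt <domain> definition for ha-235073-m02 (boot ISO, raw disk, two virtio NICs on the default and mk-ha-235073 networks). A minimal sketch, assuming the libvirt.org/go/libvirt Go bindings, of what "define libvirt domain using xml" followed by "Creating domain..." amounts to; defineAndStart is an illustrative wrapper, not the kvm2 driver's actual code:

package sketch

import "libvirt.org/go/libvirt"

// defineAndStart persists the generated <domain> XML and boots the VM.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // "Creating domain..."
}
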
	I0731 19:46:52.229071  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:76:68:c2 in network default
	I0731 19:46:52.229666  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:52.229684  139843 main.go:141] libmachine: (ha-235073-m02) Ensuring networks are active...
	I0731 19:46:52.230358  139843 main.go:141] libmachine: (ha-235073-m02) Ensuring network default is active
	I0731 19:46:52.230702  139843 main.go:141] libmachine: (ha-235073-m02) Ensuring network mk-ha-235073 is active
	I0731 19:46:52.231096  139843 main.go:141] libmachine: (ha-235073-m02) Getting domain xml...
	I0731 19:46:52.231766  139843 main.go:141] libmachine: (ha-235073-m02) Creating domain...
	I0731 19:46:53.412056  139843 main.go:141] libmachine: (ha-235073-m02) Waiting to get IP...
	I0731 19:46:53.412873  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:53.413194  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:53.413212  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:53.413189  140223 retry.go:31] will retry after 312.469495ms: waiting for machine to come up
	I0731 19:46:53.727499  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:53.727975  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:53.728008  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:53.727940  140223 retry.go:31] will retry after 369.713539ms: waiting for machine to come up
	I0731 19:46:54.099438  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:54.099870  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:54.099899  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:54.099836  140223 retry.go:31] will retry after 359.388499ms: waiting for machine to come up
	I0731 19:46:54.461310  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:54.461862  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:54.461892  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:54.461817  140223 retry.go:31] will retry after 581.689874ms: waiting for machine to come up
	I0731 19:46:55.045760  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:55.046207  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:55.046235  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:55.046171  140223 retry.go:31] will retry after 622.054876ms: waiting for machine to come up
	I0731 19:46:55.670059  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:55.670452  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:55.670479  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:55.670410  140223 retry.go:31] will retry after 810.839747ms: waiting for machine to come up
	I0731 19:46:56.482516  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:56.482857  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:56.482883  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:56.482825  140223 retry.go:31] will retry after 1.105583581s: waiting for machine to come up
	I0731 19:46:57.590408  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:57.590800  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:57.590830  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:57.590749  140223 retry.go:31] will retry after 1.461697958s: waiting for machine to come up
	I0731 19:46:59.054527  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:46:59.054908  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:46:59.054937  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:46:59.054861  140223 retry.go:31] will retry after 1.153075906s: waiting for machine to come up
	I0731 19:47:00.209551  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:00.210027  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:00.210057  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:00.209979  140223 retry.go:31] will retry after 1.436509555s: waiting for machine to come up
	I0731 19:47:01.648504  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:01.649027  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:01.649055  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:01.648965  140223 retry.go:31] will retry after 1.954522866s: waiting for machine to come up
	I0731 19:47:03.605798  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:03.606255  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:03.606278  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:03.606211  140223 retry.go:31] will retry after 2.813375548s: waiting for machine to come up
	I0731 19:47:06.422537  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:06.422994  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:06.423023  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:06.422955  140223 retry.go:31] will retry after 3.497609634s: waiting for machine to come up
	I0731 19:47:09.924629  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:09.925033  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find current IP address of domain ha-235073-m02 in network mk-ha-235073
	I0731 19:47:09.925058  139843 main.go:141] libmachine: (ha-235073-m02) DBG | I0731 19:47:09.924981  140223 retry.go:31] will retry after 4.532256157s: waiting for machine to come up
	I0731 19:47:14.460269  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.460705  139843 main.go:141] libmachine: (ha-235073-m02) Found IP for machine: 192.168.39.102
	I0731 19:47:14.460741  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
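
The retry.go lines above poll the DHCP leases of mk-ha-235073 with growing delays (312ms, 369ms, ... 4.5s) until the new MAC gets an address. A sketch of that wait loop under stated assumptions: lookupLeaseIP is a hypothetical stand-in for the driver's DHCP-lease lookup, and the backoff constants are illustrative rather than minikube's exact schedule:

package sketch

import (
	"errors"
	"math/rand"
	"time"
)

// waitForIP retries a lease lookup with a growing, jittered delay until an IP appears or the deadline passes.
func waitForIP(lookupLeaseIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupLeaseIP(); ok {
			return ip, nil
		}
		// grow the delay and add jitter, roughly matching the intervals in the log
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}
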
	I0731 19:47:14.460753  139843 main.go:141] libmachine: (ha-235073-m02) Reserving static IP address...
	I0731 19:47:14.461076  139843 main.go:141] libmachine: (ha-235073-m02) DBG | unable to find host DHCP lease matching {name: "ha-235073-m02", mac: "52:54:00:41:fe:7b", ip: "192.168.39.102"} in network mk-ha-235073
	I0731 19:47:14.531501  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Getting to WaitForSSH function...
	I0731 19:47:14.531532  139843 main.go:141] libmachine: (ha-235073-m02) Reserved static IP address: 192.168.39.102
	I0731 19:47:14.531546  139843 main.go:141] libmachine: (ha-235073-m02) Waiting for SSH to be available...
	I0731 19:47:14.534237  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.534668  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:14.534697  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.534857  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Using SSH client type: external
	I0731 19:47:14.534869  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa (-rw-------)
	I0731 19:47:14.535498  139843 main.go:141] libmachine: (ha-235073-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:47:14.535522  139843 main.go:141] libmachine: (ha-235073-m02) DBG | About to run SSH command:
	I0731 19:47:14.535536  139843 main.go:141] libmachine: (ha-235073-m02) DBG | exit 0
	I0731 19:47:14.661583  139843 main.go:141] libmachine: (ha-235073-m02) DBG | SSH cmd err, output: <nil>: 
	I0731 19:47:14.661865  139843 main.go:141] libmachine: (ha-235073-m02) KVM machine creation complete!
	I0731 19:47:14.662138  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetConfigRaw
	I0731 19:47:14.662744  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:14.662949  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:14.663161  139843 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:47:14.663190  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 19:47:14.664499  139843 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:47:14.664514  139843 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:47:14.664535  139843 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:47:14.664544  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:14.666950  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.667276  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:14.667315  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.667448  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:14.667637  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.667803  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.667930  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:14.668058  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:14.668297  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:14.668314  139843 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:47:14.776780  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
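
The probe above simply runs `exit 0` over SSH until it succeeds, first via an external ssh invocation and then with the native client. A sketch of that availability check, assuming golang.org/x/crypto/ssh; minikube's own WaitForSSH differs in its retry policy:

package sketch

import (
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH re-dials the VM until the no-op probe succeeds.
func waitForSSH(addr string, cfg *ssh.ClientConfig, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = probe(addr, cfg); lastErr == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return lastErr
}

func probe(addr string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // the same no-op command the log shows
}
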
	I0731 19:47:14.776808  139843 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:47:14.776818  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:14.779592  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.779963  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:14.779992  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.780187  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:14.780399  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.780571  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.780705  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:14.780864  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:14.781026  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:14.781039  139843 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:47:14.891250  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:47:14.891346  139843 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:47:14.891363  139843 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:47:14.891377  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetMachineName
	I0731 19:47:14.891705  139843 buildroot.go:166] provisioning hostname "ha-235073-m02"
	I0731 19:47:14.891737  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetMachineName
	I0731 19:47:14.891942  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:14.894788  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.895144  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:14.895178  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:14.895262  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:14.895458  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.895621  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:14.895817  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:14.896067  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:14.896257  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:14.896273  139843 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-235073-m02 && echo "ha-235073-m02" | sudo tee /etc/hostname
	I0731 19:47:15.019121  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073-m02
	
	I0731 19:47:15.019146  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.021653  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.021965  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.021984  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.022159  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.022340  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.022534  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.022690  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.022874  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:15.023044  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:15.023059  139843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-235073-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-235073-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-235073-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:47:15.142986  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:47:15.143022  139843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:47:15.143040  139843 buildroot.go:174] setting up certificates
	I0731 19:47:15.143048  139843 provision.go:84] configureAuth start
	I0731 19:47:15.143057  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetMachineName
	I0731 19:47:15.143354  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:47:15.145989  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.146324  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.146355  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.146548  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.148348  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.148760  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.148788  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.148854  139843 provision.go:143] copyHostCerts
	I0731 19:47:15.148891  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:47:15.148926  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 19:47:15.148936  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:47:15.149023  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:47:15.149130  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:47:15.149159  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 19:47:15.149166  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:47:15.149195  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:47:15.149246  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:47:15.149263  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 19:47:15.149267  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:47:15.149288  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:47:15.149359  139843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.ha-235073-m02 san=[127.0.0.1 192.168.39.102 ha-235073-m02 localhost minikube]
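
The provision.go line above generates a server certificate signed by the local CA whose SANs cover 127.0.0.1, 192.168.39.102, ha-235073-m02, localhost and minikube. A hedged sketch of that step using only the standard library; serial handling and key size are illustrative, not minikube's exact values, though the 3-year expiry matches the CertExpiration:26280h0m0s in the cluster config:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a CA-signed server cert whose SANs cover the node's IPs and hostnames.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative serial only
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // 3 years, as in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.102
		DNSNames:     dnsNames, // e.g. ha-235073-m02, localhost, minikube
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
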
	I0731 19:47:15.254916  139843 provision.go:177] copyRemoteCerts
	I0731 19:47:15.254975  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:47:15.255001  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.257781  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.258110  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.258130  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.258329  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.258509  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.258634  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.258743  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:47:15.343735  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:47:15.343811  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 19:47:15.368324  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:47:15.368461  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:47:15.391622  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:47:15.391688  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:47:15.414702  139843 provision.go:87] duration metric: took 271.638616ms to configureAuth
	I0731 19:47:15.414740  139843 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:47:15.414917  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:47:15.414997  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.417430  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.417806  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.417835  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.417991  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.418205  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.418372  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.418526  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.418656  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:15.418833  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:15.418853  139843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:47:15.691749  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:47:15.691774  139843 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:47:15.691781  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetURL
	I0731 19:47:15.693140  139843 main.go:141] libmachine: (ha-235073-m02) DBG | Using libvirt version 6000000
	I0731 19:47:15.695171  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.695499  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.695529  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.695742  139843 main.go:141] libmachine: Docker is up and running!
	I0731 19:47:15.695758  139843 main.go:141] libmachine: Reticulating splines...
	I0731 19:47:15.695768  139843 client.go:171] duration metric: took 23.845457271s to LocalClient.Create
	I0731 19:47:15.695796  139843 start.go:167] duration metric: took 23.845522725s to libmachine.API.Create "ha-235073"
	I0731 19:47:15.695808  139843 start.go:293] postStartSetup for "ha-235073-m02" (driver="kvm2")
	I0731 19:47:15.695822  139843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:47:15.695847  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.696128  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:47:15.696154  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.698342  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.698651  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.698677  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.698830  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.699045  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.699174  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.699309  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:47:15.785084  139843 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:47:15.789227  139843 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:47:15.789247  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:47:15.789296  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:47:15.789399  139843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 19:47:15.789410  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 19:47:15.789493  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:47:15.799406  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:47:15.823179  139843 start.go:296] duration metric: took 127.354941ms for postStartSetup
	I0731 19:47:15.823231  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetConfigRaw
	I0731 19:47:15.823808  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:47:15.826281  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.826625  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.826651  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.826861  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:47:15.827051  139843 start.go:128] duration metric: took 23.994752429s to createHost
	I0731 19:47:15.827073  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.829161  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.829509  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.829548  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.829701  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.829906  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.830042  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.830178  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.830298  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:47:15.830470  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0731 19:47:15.830481  139843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:47:15.938133  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722455235.914439840
	
	I0731 19:47:15.938157  139843 fix.go:216] guest clock: 1722455235.914439840
	I0731 19:47:15.938171  139843 fix.go:229] Guest: 2024-07-31 19:47:15.91443984 +0000 UTC Remote: 2024-07-31 19:47:15.827062034 +0000 UTC m=+77.635992638 (delta=87.377806ms)
	I0731 19:47:15.938192  139843 fix.go:200] guest clock delta is within tolerance: 87.377806ms
	I0731 19:47:15.938200  139843 start.go:83] releasing machines lock for "ha-235073-m02", held for 24.105988261s
	I0731 19:47:15.938242  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.938558  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:47:15.941151  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.941543  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.941571  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.944039  139843 out.go:177] * Found network options:
	I0731 19:47:15.945397  139843 out.go:177]   - NO_PROXY=192.168.39.146
	W0731 19:47:15.946608  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 19:47:15.946636  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.947197  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.947386  139843 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 19:47:15.947523  139843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:47:15.947566  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	W0731 19:47:15.947661  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 19:47:15.947756  139843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:47:15.947778  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 19:47:15.950389  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.950650  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.950747  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.950776  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.950902  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.951004  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:15.951039  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:15.951085  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.951174  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 19:47:15.951258  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.951323  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 19:47:15.951396  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:47:15.951453  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 19:47:15.951592  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 19:47:16.185751  139843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:47:16.192023  139843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:47:16.192099  139843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:47:16.207389  139843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:47:16.207421  139843 start.go:495] detecting cgroup driver to use...
	I0731 19:47:16.207507  139843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:47:16.224133  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:47:16.238012  139843 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:47:16.238072  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:47:16.251964  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:47:16.265421  139843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:47:16.396523  139843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:47:16.539052  139843 docker.go:233] disabling docker service ...
	I0731 19:47:16.539153  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:47:16.553387  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:47:16.565915  139843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:47:16.698838  139843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:47:16.808112  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:47:16.821882  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:47:16.839748  139843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:47:16.839803  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.849843  139843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:47:16.849902  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.860126  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.870007  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.880142  139843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:47:16.890299  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.900152  139843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:47:16.919452  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
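Note: the sed edits above set pause_image to "registry.k8s.io/pause:3.9", cgroup_manager to "cgroupfs", conmon_cgroup to "pod", and add "net.ipv4.ip_unprivileged_port_start=0" to default_sysctls, all in /etc/crio/crio.conf.d/02-crio.conf. A sketch of how to confirm the resulting values on the guest:

    # Grep the drop-in for the keys the edits above touch.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf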
	I0731 19:47:16.929097  139843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:47:16.938160  139843 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:47:16.938224  139843 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:47:16.950582  139843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:47:16.960358  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:47:17.071168  139843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:47:17.207181  139843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:47:17.207270  139843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:47:17.212016  139843 start.go:563] Will wait 60s for crictl version
	I0731 19:47:17.212075  139843 ssh_runner.go:195] Run: which crictl
	I0731 19:47:17.215671  139843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:47:17.254175  139843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:47:17.254261  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:47:17.281681  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:47:17.313016  139843 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:47:17.314286  139843 out.go:177]   - env NO_PROXY=192.168.39.146
	I0731 19:47:17.315349  139843 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 19:47:17.317820  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:17.318162  139843 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:47:06 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 19:47:17.318192  139843 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 19:47:17.318308  139843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:47:17.322441  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
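Note: the hosts-file rewrite above pins host.minikube.internal to 192.168.39.1, the gateway of the libvirt network, so workloads on the node can reach the host by name. A minimal check, as a sketch:

    # The entry should resolve to the gateway added above.
    grep 'host.minikube.internal' /etc/hosts
    getent hosts host.minikube.internal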
	I0731 19:47:17.334564  139843 mustload.go:65] Loading cluster: ha-235073
	I0731 19:47:17.334755  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:47:17.335089  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:47:17.335139  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:47:17.349535  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0731 19:47:17.349972  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:47:17.350392  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:47:17.350413  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:47:17.350744  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:47:17.350931  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:47:17.352528  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:47:17.352808  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:47:17.352840  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:47:17.367108  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0731 19:47:17.367497  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:47:17.367913  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:47:17.367932  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:47:17.368270  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:47:17.368442  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:47:17.368586  139843 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073 for IP: 192.168.39.102
	I0731 19:47:17.368598  139843 certs.go:194] generating shared ca certs ...
	I0731 19:47:17.368613  139843 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:47:17.368729  139843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:47:17.368765  139843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:47:17.368774  139843 certs.go:256] generating profile certs ...
	I0731 19:47:17.368842  139843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key
	I0731 19:47:17.368866  139843 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.9f43a361
	I0731 19:47:17.368880  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.9f43a361 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.146 192.168.39.102 192.168.39.254]
	I0731 19:47:17.455057  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.9f43a361 ...
	I0731 19:47:17.455086  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.9f43a361: {Name:mkf6dee4ca9d5bbdb847f1e93802c1d5fc8eb860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:47:17.455250  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.9f43a361 ...
	I0731 19:47:17.455268  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.9f43a361: {Name:mk97519bc18e642aa64f8384b86a970446bea27e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:47:17.455378  139843 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.9f43a361 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt
	I0731 19:47:17.455521  139843 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.9f43a361 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key
	I0731 19:47:17.455646  139843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key
	I0731 19:47:17.455662  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:47:17.455676  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:47:17.455690  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:47:17.455708  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:47:17.455720  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:47:17.455732  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:47:17.455744  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:47:17.455757  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:47:17.455803  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 19:47:17.455830  139843 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 19:47:17.455840  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:47:17.455860  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:47:17.455901  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:47:17.455922  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:47:17.455990  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:47:17.456021  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 19:47:17.456035  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 19:47:17.456048  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:47:17.456080  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:47:17.458766  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:47:17.459214  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:47:17.459242  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:47:17.459403  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:47:17.459578  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:47:17.459739  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:47:17.459871  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:47:17.529761  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 19:47:17.534900  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 19:47:17.545715  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 19:47:17.549918  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0731 19:47:17.559479  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 19:47:17.563390  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 19:47:17.572999  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 19:47:17.576983  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 19:47:17.586912  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 19:47:17.590976  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 19:47:17.602229  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 19:47:17.606397  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 19:47:17.616408  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:47:17.644649  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:47:17.671633  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:47:17.698471  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:47:17.725427  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 19:47:17.751908  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 19:47:17.778535  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:47:17.803064  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:47:17.826956  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 19:47:17.853331  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 19:47:17.879425  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:47:17.902803  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 19:47:17.919217  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0731 19:47:17.935498  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 19:47:17.951571  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 19:47:17.967478  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 19:47:17.983153  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 19:47:17.999005  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 19:47:18.015041  139843 ssh_runner.go:195] Run: openssl version
	I0731 19:47:18.020704  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:47:18.031290  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:47:18.035786  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:47:18.035845  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:47:18.041498  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:47:18.051926  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 19:47:18.062614  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 19:47:18.067024  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 19:47:18.067085  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 19:47:18.072613  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 19:47:18.082720  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 19:47:18.093102  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 19:47:18.097297  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 19:47:18.097500  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 19:47:18.102999  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
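Note: each ln -fs above creates the subject-hash symlink (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL's CA lookup expects in /etc/ssl/certs. The hash a given certificate should be linked under can be recomputed by hand (sketch):

    # Prints the 8-hex-digit subject hash; the matching <hash>.0 symlink
    # in /etc/ssl/certs should point back at this PEM.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0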
	I0731 19:47:18.113248  139843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:47:18.116995  139843 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:47:18.117040  139843 kubeadm.go:934] updating node {m02 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0731 19:47:18.117121  139843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-235073-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
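Note: the kubelet unit fragment above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 313-byte scp further down), overriding ExecStart with the node-specific --hostname-override and --node-ip flags. The effective unit can be inspected on the guest (sketch):

    # Shows kubelet.service plus every drop-in, including the ExecStart override above.
    systemctl cat kubelet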
	I0731 19:47:18.117143  139843 kube-vip.go:115] generating kube-vip config ...
	I0731 19:47:18.117170  139843 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 19:47:18.131937  139843 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 19:47:18.131992  139843 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
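Note: this manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp below), so kubelet runs kube-vip as a static pod that announces the VIP 192.168.39.254 and load-balances the API servers on port 8443. A reachability sketch once the VIP is up (-k skips certificate verification; an unauthenticated request normally gets a 401/403 JSON body back):

    curl -k https://192.168.39.254:8443/version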
	I0731 19:47:18.132031  139843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:47:18.140996  139843 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 19:47:18.141038  139843 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 19:47:18.149761  139843 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 19:47:18.149782  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 19:47:18.149832  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 19:47:18.149913  139843 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 19:47:18.149941  139843 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 19:47:18.154681  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 19:47:18.154703  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 19:47:51.221441  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 19:47:51.221532  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 19:47:51.227059  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 19:47:51.227093  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 19:48:27.139335  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:48:27.155160  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 19:48:27.155256  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 19:48:27.159527  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 19:48:27.159564  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
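Note: the guest has no cached Kubernetes binaries, so kubectl, kubeadm and kubelet are downloaded from dl.k8s.io with the published .sha256 files as checksums and then copied over SSH. A hand-rolled equivalent of the download-and-verify step, as a sketch (the loop and local filenames are illustrative, not taken from the log):

    v=v1.30.3
    for b in kubectl kubeadm kubelet; do
      curl -fsSLo "$b"        "https://dl.k8s.io/release/$v/bin/linux/amd64/$b"
      curl -fsSLo "$b.sha256" "https://dl.k8s.io/release/$v/bin/linux/amd64/$b.sha256"
      # The .sha256 file holds only the digest, so rebuild "<digest>  <file>" for sha256sum.
      echo "$(cat "$b.sha256")  $b" | sha256sum --check -
    done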
	I0731 19:48:27.530005  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 19:48:27.539327  139843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 19:48:27.555689  139843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:48:27.571458  139843 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 19:48:27.587456  139843 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 19:48:27.591145  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:48:27.602557  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:48:27.727589  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:48:27.744201  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:48:27.744588  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:48:27.744648  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:48:27.759789  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0731 19:48:27.760358  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:48:27.760862  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:48:27.760886  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:48:27.761206  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:48:27.761439  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:48:27.761608  139843 start.go:317] joinCluster: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:48:27.761732  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 19:48:27.761754  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:48:27.764780  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:48:27.765241  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:48:27.765269  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:48:27.765397  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:48:27.765564  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:48:27.765724  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:48:27.765866  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:48:27.936264  139843 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:48:27.936345  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z9kd5i.x3x4iu01r1g1k8ha --discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-235073-m02 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I0731 19:48:48.949241  139843 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token z9kd5i.x3x4iu01r1g1k8ha --discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-235073-m02 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (21.012858968s)
	I0731 19:48:48.949285  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 19:48:49.508600  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-235073-m02 minikube.k8s.io/updated_at=2024_07_31T19_48_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=ha-235073 minikube.k8s.io/primary=false
	I0731 19:48:49.663865  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-235073-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 19:48:49.802858  139843 start.go:319] duration metric: took 22.041241164s to joinCluster
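Note: after the kubeadm join above completes, the new node is labeled and its control-plane NoSchedule taint removed; from any kubeconfig pointing at the cluster the result can be checked as follows (sketch):

    # ha-235073-m02 should appear as a second control-plane node; it may stay
    # NotReady until the CNI and kube-proxy pods are running on it.
    kubectl get nodes -o wide
    kubectl -n kube-system get pods --field-selector spec.nodeName=ha-235073-m02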
	I0731 19:48:49.802957  139843 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:48:49.803266  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:48:49.804022  139843 out.go:177] * Verifying Kubernetes components...
	I0731 19:48:49.805010  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:48:50.074819  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:48:50.181665  139843 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:48:50.181931  139843 kapi.go:59] client config for ha-235073: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key", CAFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 19:48:50.181996  139843 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.146:8443
	I0731 19:48:50.182233  139843 node_ready.go:35] waiting up to 6m0s for node "ha-235073-m02" to be "Ready" ...
	I0731 19:48:50.182335  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:50.182345  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:50.182356  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:50.182363  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:50.191663  139843 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 19:48:50.682793  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:50.682821  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:50.682833  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:50.682838  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:50.687341  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:48:51.182541  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:51.182570  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:51.182582  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:51.182587  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:51.187103  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:48:51.682921  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:51.682943  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:51.682953  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:51.682957  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:51.685841  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:48:52.182664  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:52.182700  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:52.182712  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:52.182718  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:52.186372  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:52.187079  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
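Note: the loop above polls GET /api/v1/nodes/ha-235073-m02 roughly every 500ms until the Ready condition turns True, within the 6m budget noted earlier. The equivalent wait expressed with kubectl (sketch):

    kubectl wait --for=condition=Ready node/ha-235073-m02 --timeout=6m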
	I0731 19:48:52.683294  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:52.683316  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:52.683325  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:52.683329  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:52.686398  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:53.182470  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:53.182492  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:53.182501  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:53.182506  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:53.185744  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:53.682762  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:53.682785  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:53.682794  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:53.682798  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:53.686061  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:54.183148  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:54.183174  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:54.183184  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:54.183187  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:54.186602  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:54.187763  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:48:54.683000  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:54.683025  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:54.683035  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:54.683040  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:54.687691  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:48:55.183253  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:55.183282  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:55.183295  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:55.183301  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:55.192413  139843 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 19:48:55.682478  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:55.682500  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:55.682508  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:55.682512  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:55.685757  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:56.182832  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:56.182855  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:56.182864  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:56.182868  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:56.188401  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:48:56.189535  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:48:56.682662  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:56.682693  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:56.682704  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:56.682709  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:56.685692  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:48:57.183288  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:57.183311  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:57.183319  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:57.183323  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:57.186635  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:57.683267  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:57.683294  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:57.683306  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:57.683313  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:57.686725  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:58.183235  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:58.183258  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:58.183267  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:58.183275  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:58.186592  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:58.682467  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:58.682496  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:58.682506  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:58.682510  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:58.685493  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:48:58.686033  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:48:59.183427  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:59.183452  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:59.183461  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:59.183467  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:59.186875  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:48:59.682808  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:48:59.682833  139843 round_trippers.go:469] Request Headers:
	I0731 19:48:59.682844  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:48:59.682853  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:48:59.686486  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:00.182531  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:00.182555  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:00.182568  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:00.182577  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:00.185757  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:00.682480  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:00.682502  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:00.682510  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:00.682514  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:00.686623  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:49:00.687179  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:49:01.182674  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:01.182697  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:01.182705  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:01.182709  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:01.185990  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:01.682578  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:01.682601  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:01.682610  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:01.682614  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:01.685844  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:02.182527  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:02.182551  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:02.182562  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:02.182568  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:02.185966  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:02.683121  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:02.683145  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:02.683155  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:02.683162  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:02.686333  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:03.182831  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:03.182854  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:03.182863  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:03.182867  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:03.188529  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:49:03.189001  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:49:03.682842  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:03.682868  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:03.682877  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:03.682883  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:03.686425  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:04.182502  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:04.182528  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:04.182539  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:04.182545  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:04.185626  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:04.682475  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:04.682505  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:04.682514  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:04.682517  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:04.685643  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:05.182936  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:05.182964  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:05.182976  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:05.182982  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:05.188162  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:49:05.683270  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:05.683293  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:05.683301  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:05.683306  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:05.686816  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:05.687348  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:49:06.182631  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:06.182660  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:06.182671  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:06.182677  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:06.186152  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:06.682905  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:06.682930  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:06.682940  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:06.682945  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:06.686236  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:07.182723  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:07.182748  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:07.182757  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:07.182760  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:07.185925  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:07.682446  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:07.682472  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:07.682484  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:07.682489  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:07.686012  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:08.183448  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:08.183471  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.183487  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.183494  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.186821  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:08.187289  139843 node_ready.go:53] node "ha-235073-m02" has status "Ready":"False"
	I0731 19:49:08.683467  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:08.683494  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.683506  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.683510  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.688092  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:49:08.688619  139843 node_ready.go:49] node "ha-235073-m02" has status "Ready":"True"
	I0731 19:49:08.688638  139843 node_ready.go:38] duration metric: took 18.506386927s for node "ha-235073-m02" to be "Ready" ...
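
The loop above polls GET /api/v1/nodes/ha-235073-m02 roughly every 500ms until the node reports a Ready condition of "True". The sketch below shows the same check done with client-go; it is illustrative only, and the kubeconfig path, node name literal, and poll interval are assumptions, not values read from this run.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumed kubeconfig path; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-235073-m02", metav1.GetOptions{})
            if err == nil && nodeReady(node) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // poll roughly every 500ms, as in the log above
        }
    }
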
	I0731 19:49:08.688649  139843 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:49:08.688757  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:08.688769  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.688779  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.688784  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.697786  139843 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 19:49:08.704037  139843 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.704140  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-d2w7q
	I0731 19:49:08.704150  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.704161  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.704166  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.707321  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:08.707967  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:08.707981  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.707992  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.707999  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.710402  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.711104  139843 pod_ready.go:92] pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:08.711120  139843 pod_ready.go:81] duration metric: took 7.059182ms for pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.711128  139843 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.711186  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f7dzt
	I0731 19:49:08.711194  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.711201  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.711205  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.713629  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.714392  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:08.714406  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.714415  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.714421  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.716417  139843 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 19:49:08.716975  139843 pod_ready.go:92] pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:08.716988  139843 pod_ready.go:81] duration metric: took 5.853322ms for pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.716996  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.717042  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073
	I0731 19:49:08.717049  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.717055  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.717061  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.719192  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.719747  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:08.719759  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.719766  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.719769  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.721906  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.722367  139843 pod_ready.go:92] pod "etcd-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:08.722382  139843 pod_ready.go:81] duration metric: took 5.378826ms for pod "etcd-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.722389  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.722444  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073-m02
	I0731 19:49:08.722452  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.722459  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.722465  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.724963  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.725586  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:08.725599  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.725609  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.725615  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.728137  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:08.728690  139843 pod_ready.go:92] pod "etcd-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:08.728705  139843 pod_ready.go:81] duration metric: took 6.304389ms for pod "etcd-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.728722  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:08.884091  139843 request.go:629] Waited for 155.305049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073
	I0731 19:49:08.884168  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073
	I0731 19:49:08.884174  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:08.884181  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:08.884187  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:08.887154  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:09.084308  139843 request.go:629] Waited for 196.394242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:09.084406  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:09.084420  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.084435  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.084438  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.087812  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:09.088349  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:09.088366  139843 pod_ready.go:81] duration metric: took 359.636272ms for pod "kube-apiserver-ha-235073" in "kube-system" namespace to be "Ready" ...
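
The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side rate limiter, which delays requests once they exceed the configured QPS/Burst. A minimal sketch of raising those limits on a rest.Config follows; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not minikube's settings.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; values below are illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        // client-go throttles on the client side once requests exceed QPS; raising
        // QPS/Burst shortens the "Waited for ... due to client-side throttling" delays.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }
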
	I0731 19:49:09.088375  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.284497  139843 request.go:629] Waited for 196.040622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m02
	I0731 19:49:09.284573  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m02
	I0731 19:49:09.284581  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.284592  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.284597  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.287868  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:09.483936  139843 request.go:629] Waited for 195.401913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:09.484018  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:09.484027  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.484035  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.484039  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.487009  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:49:09.487559  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:09.487579  139843 pod_ready.go:81] duration metric: took 399.197759ms for pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.487589  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.683559  139843 request.go:629] Waited for 195.899757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073
	I0731 19:49:09.683621  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073
	I0731 19:49:09.683626  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.683633  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.683638  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.686902  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:09.883816  139843 request.go:629] Waited for 196.334103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:09.883901  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:09.883927  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:09.883943  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:09.883953  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:09.887473  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:09.888107  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:09.888127  139843 pod_ready.go:81] duration metric: took 400.528979ms for pod "kube-controller-manager-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:09.888137  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.084326  139843 request.go:629] Waited for 196.105395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m02
	I0731 19:49:10.084406  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m02
	I0731 19:49:10.084411  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.084419  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.084423  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.087939  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:10.284454  139843 request.go:629] Waited for 195.387188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:10.284515  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:10.284520  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.284527  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.284533  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.287832  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:10.288445  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:10.288468  139843 pod_ready.go:81] duration metric: took 400.320918ms for pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.288480  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4g5ws" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.484479  139843 request.go:629] Waited for 195.907591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4g5ws
	I0731 19:49:10.484548  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4g5ws
	I0731 19:49:10.484553  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.484561  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.484568  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.487734  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:10.683655  139843 request.go:629] Waited for 195.293136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:10.683735  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:10.683741  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.683749  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.683755  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.687449  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:10.688030  139843 pod_ready.go:92] pod "kube-proxy-4g5ws" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:10.688052  139843 pod_ready.go:81] duration metric: took 399.565448ms for pod "kube-proxy-4g5ws" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.688062  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-td8j2" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:10.884272  139843 request.go:629] Waited for 196.128002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-td8j2
	I0731 19:49:10.884374  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-td8j2
	I0731 19:49:10.884386  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:10.884397  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:10.884403  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:10.889281  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:49:11.084223  139843 request.go:629] Waited for 194.075007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:11.084294  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:11.084301  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.084312  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.084317  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.088127  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.088856  139843 pod_ready.go:92] pod "kube-proxy-td8j2" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:11.088878  139843 pod_ready.go:81] duration metric: took 400.81028ms for pod "kube-proxy-td8j2" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.088890  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.283908  139843 request.go:629] Waited for 194.922818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073
	I0731 19:49:11.283982  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073
	I0731 19:49:11.283991  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.283999  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.284009  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.287332  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.484291  139843 request.go:629] Waited for 196.398574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:11.484376  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:49:11.484387  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.484420  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.484434  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.487627  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.488525  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:11.488542  139843 pod_ready.go:81] duration metric: took 399.646685ms for pod "kube-scheduler-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.488552  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.683616  139843 request.go:629] Waited for 194.979494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m02
	I0731 19:49:11.683682  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m02
	I0731 19:49:11.683687  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.683694  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.683698  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.687195  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.884223  139843 request.go:629] Waited for 196.369854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:11.884304  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:49:11.884309  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.884317  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.884323  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.887835  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:11.888327  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:49:11.888346  139843 pod_ready.go:81] duration metric: took 399.788033ms for pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:49:11.888359  139843 pod_ready.go:38] duration metric: took 3.199672771s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
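
Each per-pod wait above fetches the pod, then its node, and passes once the pod's Ready condition is "True". A small sketch of that per-pod check, again with an assumed kubeconfig path and pod name:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-235073", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s ready=%v\n", pod.Name, podReady(pod))
    }
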
	I0731 19:49:11.888412  139843 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:49:11.888475  139843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:49:11.904507  139843 api_server.go:72] duration metric: took 22.101506607s to wait for apiserver process to appear ...
	I0731 19:49:11.904533  139843 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:49:11.904555  139843 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0731 19:49:11.908571  139843 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0731 19:49:11.908648  139843 round_trippers.go:463] GET https://192.168.39.146:8443/version
	I0731 19:49:11.908660  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:11.908669  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:11.908676  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:11.909351  139843 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 19:49:11.909491  139843 api_server.go:141] control plane version: v1.30.3
	I0731 19:49:11.909510  139843 api_server.go:131] duration metric: took 4.971291ms to wait for apiserver health ...
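
The health probe above is a plain HTTPS GET against /healthz (expecting the literal body "ok"), followed by GET /version to read the control-plane version. A rough equivalent in Go is sketched below; it skips TLS verification purely for illustration, whereas a real check should trust the cluster CA, and depending on the cluster's RBAC the endpoint may also require credentials.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Illustrative only: TLS verification is skipped here; trust the cluster CA in real use.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.39.146:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect "200 ok" when the apiserver is healthy
    }
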
	I0731 19:49:11.909517  139843 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:49:12.083986  139843 request.go:629] Waited for 174.378836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:12.084066  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:12.084073  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:12.084087  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:12.084095  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:12.089366  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:49:12.094198  139843 system_pods.go:59] 17 kube-system pods found
	I0731 19:49:12.094244  139843 system_pods.go:61] "coredns-7db6d8ff4d-d2w7q" [c47597b4-a38b-438c-9c3b-8f7f45130f75] Running
	I0731 19:49:12.094251  139843 system_pods.go:61] "coredns-7db6d8ff4d-f7dzt" [9549b5d7-bb23-4934-883b-dd07f8d864d8] Running
	I0731 19:49:12.094255  139843 system_pods.go:61] "etcd-ha-235073" [ef927139-ead6-413d-b0cd-beb931fc4700] Running
	I0731 19:49:12.094258  139843 system_pods.go:61] "etcd-ha-235073-m02" [2bc3b6c8-c8de-42c0-a752-302d07433ebc] Running
	I0731 19:49:12.094262  139843 system_pods.go:61] "kindnet-6mpsn" [1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef] Running
	I0731 19:49:12.094265  139843 system_pods.go:61] "kindnet-v5g92" [c8020666-5376-4bdf-a9a3-d10b67fc04a9] Running
	I0731 19:49:12.094268  139843 system_pods.go:61] "kube-apiserver-ha-235073" [c7da5168-cd07-4660-91a7-f25bf44db28e] Running
	I0731 19:49:12.094271  139843 system_pods.go:61] "kube-apiserver-ha-235073-m02" [bb498dc0-7bea-4f44-b6ea-0b66122d8205] Running
	I0731 19:49:12.094274  139843 system_pods.go:61] "kube-controller-manager-ha-235073" [1d7ad140-888f-4863-aa09-0651eae569a7] Running
	I0731 19:49:12.094278  139843 system_pods.go:61] "kube-controller-manager-ha-235073-m02" [7d1e23f4-1609-476f-b30e-1e18d291ca4c] Running
	I0731 19:49:12.094281  139843 system_pods.go:61] "kube-proxy-4g5ws" [681015ee-d7ba-460f-a593-0152df2b065d] Running
	I0731 19:49:12.094284  139843 system_pods.go:61] "kube-proxy-td8j2" [b836edfa-4df1-40e4-a58a-3f23afd5b78b] Running
	I0731 19:49:12.094287  139843 system_pods.go:61] "kube-scheduler-ha-235073" [597d51e9-b674-4b7f-b104-6e8808a5d593] Running
	I0731 19:49:12.094290  139843 system_pods.go:61] "kube-scheduler-ha-235073-m02" [84f686e7-4317-41b4-8064-621a7fa7ade8] Running
	I0731 19:49:12.094293  139843 system_pods.go:61] "kube-vip-ha-235073" [f28e113e-7c11-4a00-a8cb-fb5527042343] Running
	I0731 19:49:12.094296  139843 system_pods.go:61] "kube-vip-ha-235073-m02" [4f387765-627c-49e4-9fce-eae672099a6d] Running
	I0731 19:49:12.094299  139843 system_pods.go:61] "storage-provisioner" [9cd9bb70-badc-4b4b-a135-62644edac7dd] Running
	I0731 19:49:12.094307  139843 system_pods.go:74] duration metric: took 184.784656ms to wait for pod list to return data ...
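
The "17 kube-system pods found" inventory above is a single list call against the kube-system namespace. A sketch of the equivalent listing, with the kubeconfig path again a placeholder:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
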
	I0731 19:49:12.094318  139843 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:49:12.283677  139843 request.go:629] Waited for 189.279048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/default/serviceaccounts
	I0731 19:49:12.283743  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/default/serviceaccounts
	I0731 19:49:12.283754  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:12.283768  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:12.283775  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:12.286897  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:49:12.287132  139843 default_sa.go:45] found service account: "default"
	I0731 19:49:12.287149  139843 default_sa.go:55] duration metric: took 192.825253ms for default service account to be created ...
	I0731 19:49:12.287158  139843 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:49:12.484179  139843 request.go:629] Waited for 196.944899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:12.484243  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:49:12.484248  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:12.484264  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:12.484268  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:12.491731  139843 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 19:49:12.496731  139843 system_pods.go:86] 17 kube-system pods found
	I0731 19:49:12.496757  139843 system_pods.go:89] "coredns-7db6d8ff4d-d2w7q" [c47597b4-a38b-438c-9c3b-8f7f45130f75] Running
	I0731 19:49:12.496763  139843 system_pods.go:89] "coredns-7db6d8ff4d-f7dzt" [9549b5d7-bb23-4934-883b-dd07f8d864d8] Running
	I0731 19:49:12.496768  139843 system_pods.go:89] "etcd-ha-235073" [ef927139-ead6-413d-b0cd-beb931fc4700] Running
	I0731 19:49:12.496772  139843 system_pods.go:89] "etcd-ha-235073-m02" [2bc3b6c8-c8de-42c0-a752-302d07433ebc] Running
	I0731 19:49:12.496776  139843 system_pods.go:89] "kindnet-6mpsn" [1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef] Running
	I0731 19:49:12.496780  139843 system_pods.go:89] "kindnet-v5g92" [c8020666-5376-4bdf-a9a3-d10b67fc04a9] Running
	I0731 19:49:12.496784  139843 system_pods.go:89] "kube-apiserver-ha-235073" [c7da5168-cd07-4660-91a7-f25bf44db28e] Running
	I0731 19:49:12.496788  139843 system_pods.go:89] "kube-apiserver-ha-235073-m02" [bb498dc0-7bea-4f44-b6ea-0b66122d8205] Running
	I0731 19:49:12.496792  139843 system_pods.go:89] "kube-controller-manager-ha-235073" [1d7ad140-888f-4863-aa09-0651eae569a7] Running
	I0731 19:49:12.496796  139843 system_pods.go:89] "kube-controller-manager-ha-235073-m02" [7d1e23f4-1609-476f-b30e-1e18d291ca4c] Running
	I0731 19:49:12.496800  139843 system_pods.go:89] "kube-proxy-4g5ws" [681015ee-d7ba-460f-a593-0152df2b065d] Running
	I0731 19:49:12.496806  139843 system_pods.go:89] "kube-proxy-td8j2" [b836edfa-4df1-40e4-a58a-3f23afd5b78b] Running
	I0731 19:49:12.496812  139843 system_pods.go:89] "kube-scheduler-ha-235073" [597d51e9-b674-4b7f-b104-6e8808a5d593] Running
	I0731 19:49:12.496817  139843 system_pods.go:89] "kube-scheduler-ha-235073-m02" [84f686e7-4317-41b4-8064-621a7fa7ade8] Running
	I0731 19:49:12.496821  139843 system_pods.go:89] "kube-vip-ha-235073" [f28e113e-7c11-4a00-a8cb-fb5527042343] Running
	I0731 19:49:12.496824  139843 system_pods.go:89] "kube-vip-ha-235073-m02" [4f387765-627c-49e4-9fce-eae672099a6d] Running
	I0731 19:49:12.496828  139843 system_pods.go:89] "storage-provisioner" [9cd9bb70-badc-4b4b-a135-62644edac7dd] Running
	I0731 19:49:12.496834  139843 system_pods.go:126] duration metric: took 209.666593ms to wait for k8s-apps to be running ...
	I0731 19:49:12.496844  139843 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:49:12.496889  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:49:12.512189  139843 system_svc.go:56] duration metric: took 15.336404ms WaitForService to wait for kubelet
	I0731 19:49:12.512220  139843 kubeadm.go:582] duration metric: took 22.709226064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
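
The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH inside the guest and treats a zero exit status as "running". A local sketch of the same idea, simplified to checking only the kubelet unit:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // With --quiet, systemctl prints nothing and a zero exit status means the unit is active.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
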
	I0731 19:49:12.512246  139843 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:49:12.683605  139843 request.go:629] Waited for 171.261957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes
	I0731 19:49:12.683673  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes
	I0731 19:49:12.683680  139843 round_trippers.go:469] Request Headers:
	I0731 19:49:12.683690  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:49:12.683700  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:49:12.688391  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:49:12.689404  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:49:12.689428  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:49:12.689444  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:49:12.689449  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:49:12.689455  139843 node_conditions.go:105] duration metric: took 177.202999ms to run NodePressure ...
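
The NodePressure step lists all nodes and reads each one's ephemeral-storage and CPU capacity, which is where the "17734596Ki" and "cpu capacity is 2" figures above come from. A sketch of reading those capacities, with the kubeconfig path a placeholder:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }
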
	I0731 19:49:12.689470  139843 start.go:241] waiting for startup goroutines ...
	I0731 19:49:12.689524  139843 start.go:255] writing updated cluster config ...
	I0731 19:49:12.691548  139843 out.go:177] 
	I0731 19:49:12.693025  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:49:12.693123  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:49:12.694848  139843 out.go:177] * Starting "ha-235073-m03" control-plane node in "ha-235073" cluster
	I0731 19:49:12.696075  139843 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:49:12.696105  139843 cache.go:56] Caching tarball of preloaded images
	I0731 19:49:12.696223  139843 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:49:12.696239  139843 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:49:12.696324  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:49:12.696503  139843 start.go:360] acquireMachinesLock for ha-235073-m03: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:49:12.696553  139843 start.go:364] duration metric: took 32.257µs to acquireMachinesLock for "ha-235073-m03"
	I0731 19:49:12.696571  139843 start.go:93] Provisioning new machine with config: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:49:12.696707  139843 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 19:49:12.698277  139843 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 19:49:12.698378  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:49:12.698426  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:49:12.713698  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0731 19:49:12.714190  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:49:12.714624  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:49:12.714644  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:49:12.715070  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:49:12.715255  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetMachineName
	I0731 19:49:12.715546  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:12.715767  139843 start.go:159] libmachine.API.Create for "ha-235073" (driver="kvm2")
	I0731 19:49:12.715795  139843 client.go:168] LocalClient.Create starting
	I0731 19:49:12.715823  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 19:49:12.715855  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:49:12.715871  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:49:12.715923  139843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 19:49:12.715943  139843 main.go:141] libmachine: Decoding PEM data...
	I0731 19:49:12.715953  139843 main.go:141] libmachine: Parsing certificate...
	I0731 19:49:12.715969  139843 main.go:141] libmachine: Running pre-create checks...
	I0731 19:49:12.715977  139843 main.go:141] libmachine: (ha-235073-m03) Calling .PreCreateCheck
	I0731 19:49:12.716157  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetConfigRaw
	I0731 19:49:12.716568  139843 main.go:141] libmachine: Creating machine...
	I0731 19:49:12.716581  139843 main.go:141] libmachine: (ha-235073-m03) Calling .Create
	I0731 19:49:12.716737  139843 main.go:141] libmachine: (ha-235073-m03) Creating KVM machine...
	I0731 19:49:12.717976  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found existing default KVM network
	I0731 19:49:12.718141  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found existing private KVM network mk-ha-235073
	I0731 19:49:12.718291  139843 main.go:141] libmachine: (ha-235073-m03) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03 ...
	I0731 19:49:12.718315  139843 main.go:141] libmachine: (ha-235073-m03) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:49:12.718343  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:12.718271  140882 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:49:12.718429  139843 main.go:141] libmachine: (ha-235073-m03) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 19:49:12.963627  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:12.963488  140882 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa...
	I0731 19:49:13.195137  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:13.194998  140882 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/ha-235073-m03.rawdisk...
	I0731 19:49:13.195171  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Writing magic tar header
	I0731 19:49:13.195182  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Writing SSH key tar header
	I0731 19:49:13.195192  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:13.195108  140882 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03 ...
	I0731 19:49:13.195256  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03
	I0731 19:49:13.195298  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03 (perms=drwx------)
	I0731 19:49:13.195311  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 19:49:13.195318  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:49:13.195329  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 19:49:13.195337  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 19:49:13.195352  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:49:13.195386  139843 main.go:141] libmachine: (ha-235073-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:49:13.195393  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:49:13.195417  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 19:49:13.195423  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:49:13.195433  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:49:13.195444  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Checking permissions on dir: /home
	I0731 19:49:13.195453  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Skipping /home - not owner
	I0731 19:49:13.195488  139843 main.go:141] libmachine: (ha-235073-m03) Creating domain...
	I0731 19:49:13.196312  139843 main.go:141] libmachine: (ha-235073-m03) define libvirt domain using xml: 
	I0731 19:49:13.196333  139843 main.go:141] libmachine: (ha-235073-m03) <domain type='kvm'>
	I0731 19:49:13.196344  139843 main.go:141] libmachine: (ha-235073-m03)   <name>ha-235073-m03</name>
	I0731 19:49:13.196360  139843 main.go:141] libmachine: (ha-235073-m03)   <memory unit='MiB'>2200</memory>
	I0731 19:49:13.196370  139843 main.go:141] libmachine: (ha-235073-m03)   <vcpu>2</vcpu>
	I0731 19:49:13.196380  139843 main.go:141] libmachine: (ha-235073-m03)   <features>
	I0731 19:49:13.196390  139843 main.go:141] libmachine: (ha-235073-m03)     <acpi/>
	I0731 19:49:13.196403  139843 main.go:141] libmachine: (ha-235073-m03)     <apic/>
	I0731 19:49:13.196414  139843 main.go:141] libmachine: (ha-235073-m03)     <pae/>
	I0731 19:49:13.196424  139843 main.go:141] libmachine: (ha-235073-m03)     
	I0731 19:49:13.196433  139843 main.go:141] libmachine: (ha-235073-m03)   </features>
	I0731 19:49:13.196443  139843 main.go:141] libmachine: (ha-235073-m03)   <cpu mode='host-passthrough'>
	I0731 19:49:13.196451  139843 main.go:141] libmachine: (ha-235073-m03)   
	I0731 19:49:13.196461  139843 main.go:141] libmachine: (ha-235073-m03)   </cpu>
	I0731 19:49:13.196469  139843 main.go:141] libmachine: (ha-235073-m03)   <os>
	I0731 19:49:13.196479  139843 main.go:141] libmachine: (ha-235073-m03)     <type>hvm</type>
	I0731 19:49:13.196492  139843 main.go:141] libmachine: (ha-235073-m03)     <boot dev='cdrom'/>
	I0731 19:49:13.196506  139843 main.go:141] libmachine: (ha-235073-m03)     <boot dev='hd'/>
	I0731 19:49:13.196517  139843 main.go:141] libmachine: (ha-235073-m03)     <bootmenu enable='no'/>
	I0731 19:49:13.196527  139843 main.go:141] libmachine: (ha-235073-m03)   </os>
	I0731 19:49:13.196535  139843 main.go:141] libmachine: (ha-235073-m03)   <devices>
	I0731 19:49:13.196543  139843 main.go:141] libmachine: (ha-235073-m03)     <disk type='file' device='cdrom'>
	I0731 19:49:13.196555  139843 main.go:141] libmachine: (ha-235073-m03)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/boot2docker.iso'/>
	I0731 19:49:13.196567  139843 main.go:141] libmachine: (ha-235073-m03)       <target dev='hdc' bus='scsi'/>
	I0731 19:49:13.196583  139843 main.go:141] libmachine: (ha-235073-m03)       <readonly/>
	I0731 19:49:13.196593  139843 main.go:141] libmachine: (ha-235073-m03)     </disk>
	I0731 19:49:13.196609  139843 main.go:141] libmachine: (ha-235073-m03)     <disk type='file' device='disk'>
	I0731 19:49:13.196621  139843 main.go:141] libmachine: (ha-235073-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:49:13.196636  139843 main.go:141] libmachine: (ha-235073-m03)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/ha-235073-m03.rawdisk'/>
	I0731 19:49:13.196657  139843 main.go:141] libmachine: (ha-235073-m03)       <target dev='hda' bus='virtio'/>
	I0731 19:49:13.196669  139843 main.go:141] libmachine: (ha-235073-m03)     </disk>
	I0731 19:49:13.196679  139843 main.go:141] libmachine: (ha-235073-m03)     <interface type='network'>
	I0731 19:49:13.196691  139843 main.go:141] libmachine: (ha-235073-m03)       <source network='mk-ha-235073'/>
	I0731 19:49:13.196700  139843 main.go:141] libmachine: (ha-235073-m03)       <model type='virtio'/>
	I0731 19:49:13.196706  139843 main.go:141] libmachine: (ha-235073-m03)     </interface>
	I0731 19:49:13.196713  139843 main.go:141] libmachine: (ha-235073-m03)     <interface type='network'>
	I0731 19:49:13.196735  139843 main.go:141] libmachine: (ha-235073-m03)       <source network='default'/>
	I0731 19:49:13.196759  139843 main.go:141] libmachine: (ha-235073-m03)       <model type='virtio'/>
	I0731 19:49:13.196769  139843 main.go:141] libmachine: (ha-235073-m03)     </interface>
	I0731 19:49:13.196787  139843 main.go:141] libmachine: (ha-235073-m03)     <serial type='pty'>
	I0731 19:49:13.196793  139843 main.go:141] libmachine: (ha-235073-m03)       <target port='0'/>
	I0731 19:49:13.196798  139843 main.go:141] libmachine: (ha-235073-m03)     </serial>
	I0731 19:49:13.196806  139843 main.go:141] libmachine: (ha-235073-m03)     <console type='pty'>
	I0731 19:49:13.196816  139843 main.go:141] libmachine: (ha-235073-m03)       <target type='serial' port='0'/>
	I0731 19:49:13.196828  139843 main.go:141] libmachine: (ha-235073-m03)     </console>
	I0731 19:49:13.196838  139843 main.go:141] libmachine: (ha-235073-m03)     <rng model='virtio'>
	I0731 19:49:13.196848  139843 main.go:141] libmachine: (ha-235073-m03)       <backend model='random'>/dev/random</backend>
	I0731 19:49:13.196858  139843 main.go:141] libmachine: (ha-235073-m03)     </rng>
	I0731 19:49:13.196866  139843 main.go:141] libmachine: (ha-235073-m03)     
	I0731 19:49:13.196874  139843 main.go:141] libmachine: (ha-235073-m03)     
	I0731 19:49:13.196880  139843 main.go:141] libmachine: (ha-235073-m03)   </devices>
	I0731 19:49:13.196886  139843 main.go:141] libmachine: (ha-235073-m03) </domain>
	I0731 19:49:13.196896  139843 main.go:141] libmachine: (ha-235073-m03) 
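For reference, the XML logged above is handed to libvirt to define and boot the guest. Below is a minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings and a local qemu:///system connection; it is illustrative only, not minikube's kvm2 driver code.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Assumed local system URI; the driver in the log talks to the same hypervisor.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Placeholder for a full definition such as the <domain type='kvm'> XML above.
	domainXML := `<domain type='kvm'>...</domain>`

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // Create() boots the defined domain
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}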
	I0731 19:49:13.203712  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:f2:41:ab in network default
	I0731 19:49:13.204272  139843 main.go:141] libmachine: (ha-235073-m03) Ensuring networks are active...
	I0731 19:49:13.204294  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:13.204977  139843 main.go:141] libmachine: (ha-235073-m03) Ensuring network default is active
	I0731 19:49:13.205229  139843 main.go:141] libmachine: (ha-235073-m03) Ensuring network mk-ha-235073 is active
	I0731 19:49:13.205590  139843 main.go:141] libmachine: (ha-235073-m03) Getting domain xml...
	I0731 19:49:13.206439  139843 main.go:141] libmachine: (ha-235073-m03) Creating domain...
	I0731 19:49:14.417702  139843 main.go:141] libmachine: (ha-235073-m03) Waiting to get IP...
	I0731 19:49:14.418375  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:14.418792  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:14.418838  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:14.418768  140882 retry.go:31] will retry after 301.990056ms: waiting for machine to come up
	I0731 19:49:14.722399  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:14.722867  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:14.722894  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:14.722810  140882 retry.go:31] will retry after 380.1158ms: waiting for machine to come up
	I0731 19:49:15.104470  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:15.104900  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:15.104928  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:15.104857  140882 retry.go:31] will retry after 481.472336ms: waiting for machine to come up
	I0731 19:49:15.587436  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:15.587814  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:15.587844  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:15.587775  140882 retry.go:31] will retry after 446.282461ms: waiting for machine to come up
	I0731 19:49:16.035180  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:16.035583  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:16.035610  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:16.035535  140882 retry.go:31] will retry after 637.584414ms: waiting for machine to come up
	I0731 19:49:16.674897  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:16.675311  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:16.675336  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:16.675266  140882 retry.go:31] will retry after 740.193685ms: waiting for machine to come up
	I0731 19:49:17.417075  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:17.417538  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:17.417571  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:17.417475  140882 retry.go:31] will retry after 931.617013ms: waiting for machine to come up
	I0731 19:49:18.350335  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:18.350809  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:18.350835  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:18.350786  140882 retry.go:31] will retry after 1.145262324s: waiting for machine to come up
	I0731 19:49:19.498024  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:19.498539  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:19.498564  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:19.498490  140882 retry.go:31] will retry after 1.70182596s: waiting for machine to come up
	I0731 19:49:21.201440  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:21.201898  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:21.201926  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:21.201850  140882 retry.go:31] will retry after 2.005317649s: waiting for machine to come up
	I0731 19:49:23.209062  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:23.209764  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:23.209812  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:23.209708  140882 retry.go:31] will retry after 2.130232319s: waiting for machine to come up
	I0731 19:49:25.342820  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:25.343281  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:25.343310  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:25.343241  140882 retry.go:31] will retry after 2.512740406s: waiting for machine to come up
	I0731 19:49:27.857598  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:27.858125  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:27.858156  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:27.858085  140882 retry.go:31] will retry after 4.435303382s: waiting for machine to come up
	I0731 19:49:32.298335  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:32.298703  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find current IP address of domain ha-235073-m03 in network mk-ha-235073
	I0731 19:49:32.298730  139843 main.go:141] libmachine: (ha-235073-m03) DBG | I0731 19:49:32.298654  140882 retry.go:31] will retry after 4.668024043s: waiting for machine to come up
	I0731 19:49:36.970540  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:36.971105  139843 main.go:141] libmachine: (ha-235073-m03) Found IP for machine: 192.168.39.136
	I0731 19:49:36.971129  139843 main.go:141] libmachine: (ha-235073-m03) Reserving static IP address...
	I0731 19:49:36.971143  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has current primary IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:36.971532  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find host DHCP lease matching {name: "ha-235073-m03", mac: "52:54:00:6d:fb:8e", ip: "192.168.39.136"} in network mk-ha-235073
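The retries above poll for a DHCP lease on the guest's MAC address until one appears. A rough sketch of such a wait loop follows; lookupLeaseIP is a hypothetical placeholder (minikube queries libvirt differently), and the timings are chosen only to echo the intervals seen in the log.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for asking the hypervisor which IP
// the given MAC currently holds on the given network.
func lookupLeaseIP(network, mac string) (string, error) {
	return "", errors.New("no lease yet") // stubbed for the sketch
}

// waitForIP polls for a lease with a growing delay, mirroring the
// "will retry after ..." cadence logged above.
func waitForIP(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(network, mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // back off, roughly like the intervals in the log
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s on %s", mac, network)
}

func main() {
	ip, err := waitForIP("mk-ha-235073", "52:54:00:6d:fb:8e", 25*time.Second)
	fmt.Println(ip, err)
}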
	I0731 19:49:37.046651  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Getting to WaitForSSH function...
	I0731 19:49:37.046684  139843 main.go:141] libmachine: (ha-235073-m03) Reserved static IP address: 192.168.39.136
	I0731 19:49:37.046697  139843 main.go:141] libmachine: (ha-235073-m03) Waiting for SSH to be available...
	I0731 19:49:37.049355  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:37.049693  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073
	I0731 19:49:37.049734  139843 main.go:141] libmachine: (ha-235073-m03) DBG | unable to find defined IP address of network mk-ha-235073 interface with MAC address 52:54:00:6d:fb:8e
	I0731 19:49:37.049874  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using SSH client type: external
	I0731 19:49:37.049900  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa (-rw-------)
	I0731 19:49:37.049953  139843 main.go:141] libmachine: (ha-235073-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:49:37.049983  139843 main.go:141] libmachine: (ha-235073-m03) DBG | About to run SSH command:
	I0731 19:49:37.050000  139843 main.go:141] libmachine: (ha-235073-m03) DBG | exit 0
	I0731 19:49:37.053802  139843 main.go:141] libmachine: (ha-235073-m03) DBG | SSH cmd err, output: exit status 255: 
	I0731 19:49:37.053828  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 19:49:37.053839  139843 main.go:141] libmachine: (ha-235073-m03) DBG | command : exit 0
	I0731 19:49:37.053844  139843 main.go:141] libmachine: (ha-235073-m03) DBG | err     : exit status 255
	I0731 19:49:37.053853  139843 main.go:141] libmachine: (ha-235073-m03) DBG | output  : 
	I0731 19:49:40.054762  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Getting to WaitForSSH function...
	I0731 19:49:40.057459  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.057925  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.057962  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.058043  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using SSH client type: external
	I0731 19:49:40.058082  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa (-rw-------)
	I0731 19:49:40.058113  139843 main.go:141] libmachine: (ha-235073-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:49:40.058127  139843 main.go:141] libmachine: (ha-235073-m03) DBG | About to run SSH command:
	I0731 19:49:40.058142  139843 main.go:141] libmachine: (ha-235073-m03) DBG | exit 0
	I0731 19:49:40.189626  139843 main.go:141] libmachine: (ha-235073-m03) DBG | SSH cmd err, output: <nil>: 
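WaitForSSH simply keeps running "exit 0" over SSH until the command succeeds, as in the failed attempt at 19:49:37 and the success at 19:49:40 above. A hedged sketch of that probe, using the external ssh client with the same options shown in the log; the retry spacing and attempt count are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs "exit 0" on the guest until the ssh command returns success.
func waitForSSH(ip, keyPath string, attempts int) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	for i := 0; i < attempts; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil // the guest's sshd answered and ran the command
		}
		time.Sleep(3 * time.Second) // similar spacing to the retry in the log
	}
	return fmt.Errorf("ssh to %s never became available", ip)
}

func main() {
	if err := waitForSSH("192.168.39.136", "/path/to/id_rsa", 10); err != nil {
		fmt.Println(err)
	}
}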
	I0731 19:49:40.189871  139843 main.go:141] libmachine: (ha-235073-m03) KVM machine creation complete!
	I0731 19:49:40.190213  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetConfigRaw
	I0731 19:49:40.190809  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:40.191043  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:40.191214  139843 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:49:40.191230  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:49:40.192502  139843 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:49:40.192516  139843 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:49:40.192522  139843 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:49:40.192528  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.194981  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.195297  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.195323  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.195496  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.195691  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.195894  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.196034  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.196246  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:40.196467  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:40.196478  139843 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:49:40.312666  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:49:40.312697  139843 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:49:40.312706  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.315500  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.315839  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.315867  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.315998  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.316188  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.316352  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.316503  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.316683  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:40.316843  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:40.316854  139843 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:49:40.430110  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:49:40.430171  139843 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:49:40.430179  139843 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:49:40.430187  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetMachineName
	I0731 19:49:40.430469  139843 buildroot.go:166] provisioning hostname "ha-235073-m03"
	I0731 19:49:40.430491  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetMachineName
	I0731 19:49:40.430689  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.433312  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.433683  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.433703  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.433856  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.434054  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.434203  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.434329  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.434530  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:40.434688  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:40.434700  139843 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-235073-m03 && echo "ha-235073-m03" | sudo tee /etc/hostname
	I0731 19:49:40.563706  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073-m03
	
	I0731 19:49:40.563740  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.566368  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.566729  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.566757  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.566911  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.567109  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.567302  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.567507  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.567664  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:40.567823  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:40.567839  139843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-235073-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-235073-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-235073-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:49:40.691258  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:49:40.691293  139843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:49:40.691314  139843 buildroot.go:174] setting up certificates
	I0731 19:49:40.691327  139843 provision.go:84] configureAuth start
	I0731 19:49:40.691340  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetMachineName
	I0731 19:49:40.691652  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:49:40.694219  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.694696  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.694719  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.694934  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.696956  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.697357  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.697387  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.697517  139843 provision.go:143] copyHostCerts
	I0731 19:49:40.697556  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:49:40.697589  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 19:49:40.697611  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:49:40.697683  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:49:40.697758  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:49:40.697776  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 19:49:40.697783  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:49:40.697806  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:49:40.697848  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:49:40.697866  139843 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 19:49:40.697872  139843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:49:40.697894  139843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:49:40.697942  139843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.ha-235073-m03 san=[127.0.0.1 192.168.39.136 ha-235073-m03 localhost minikube]
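The server certificate above is generated with the listed SANs (loopback, the node IP, the hostname and aliases) and signed by the shared minikube CA. The following is a minimal Go crypto/x509 sketch of producing such a SAN-bearing certificate; the self-generated CA, key sizes, serial numbers, and validity periods are assumptions for illustration, not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In practice the CA cert and key would be loaded from ca.pem / ca-key.pem;
	// here a throwaway CA is generated so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the hostname and IP SANs, signed by the CA.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-235073-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-235073-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.136")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}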
	I0731 19:49:40.934287  139843 provision.go:177] copyRemoteCerts
	I0731 19:49:40.934344  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:49:40.934368  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:40.937136  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.937484  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:40.937507  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:40.937746  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:40.937932  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:40.938104  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:40.938260  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:49:41.023742  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:49:41.023817  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:49:41.051389  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:49:41.051469  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 19:49:41.076706  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:49:41.076784  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 19:49:41.100557  139843 provision.go:87] duration metric: took 409.214806ms to configureAuth
	I0731 19:49:41.100590  139843 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:49:41.100848  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:49:41.100949  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:41.103740  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.104105  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.104131  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.104338  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.104544  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.104728  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.104886  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.105085  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:41.105301  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:41.105318  139843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:49:41.394123  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:49:41.394157  139843 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:49:41.394167  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetURL
	I0731 19:49:41.395400  139843 main.go:141] libmachine: (ha-235073-m03) DBG | Using libvirt version 6000000
	I0731 19:49:41.397436  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.397766  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.397793  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.397919  139843 main.go:141] libmachine: Docker is up and running!
	I0731 19:49:41.397934  139843 main.go:141] libmachine: Reticulating splines...
	I0731 19:49:41.397942  139843 client.go:171] duration metric: took 28.682138125s to LocalClient.Create
	I0731 19:49:41.397970  139843 start.go:167] duration metric: took 28.682204129s to libmachine.API.Create "ha-235073"
	I0731 19:49:41.397982  139843 start.go:293] postStartSetup for "ha-235073-m03" (driver="kvm2")
	I0731 19:49:41.397997  139843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:49:41.398018  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.398284  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:49:41.398307  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:41.400510  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.400846  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.400870  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.401032  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.401239  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.401457  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.401624  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:49:41.487941  139843 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:49:41.492747  139843 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:49:41.492774  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:49:41.492831  139843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:49:41.492907  139843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 19:49:41.492921  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 19:49:41.493032  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:49:41.502859  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:49:41.527878  139843 start.go:296] duration metric: took 129.876972ms for postStartSetup
	I0731 19:49:41.527936  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetConfigRaw
	I0731 19:49:41.528505  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:49:41.531265  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.531659  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.531699  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.531979  139843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:49:41.532211  139843 start.go:128] duration metric: took 28.83549273s to createHost
	I0731 19:49:41.532235  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:41.534681  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.535082  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.535106  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.535285  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.535487  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.535637  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.535836  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.536031  139843 main.go:141] libmachine: Using SSH client type: native
	I0731 19:49:41.536235  139843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0731 19:49:41.536247  139843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:49:41.649889  139843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722455381.621855104
	
	I0731 19:49:41.649915  139843 fix.go:216] guest clock: 1722455381.621855104
	I0731 19:49:41.649924  139843 fix.go:229] Guest: 2024-07-31 19:49:41.621855104 +0000 UTC Remote: 2024-07-31 19:49:41.532223138 +0000 UTC m=+223.341153733 (delta=89.631966ms)
	I0731 19:49:41.649947  139843 fix.go:200] guest clock delta is within tolerance: 89.631966ms
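The clock check above runs date on the guest, parses the seconds.nanoseconds output, and accepts the host/guest delta if it falls inside a tolerance. A small sketch of that comparison; the one-second tolerance here is an assumed value, not taken from the log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// withinTolerance parses "seconds.nanoseconds" output from the guest and
// reports whether its offset from the local clock is within tolerance.
func withinTolerance(guestOutput string, tolerance time.Duration) (time.Duration, bool) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64) // %N yields nine digits
	}
	guest := time.Unix(sec, nsec)
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	delta, ok := withinTolerance("1722455381.621855104", time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}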
	I0731 19:49:41.649954  139843 start.go:83] releasing machines lock for "ha-235073-m03", held for 28.95339132s
	I0731 19:49:41.649980  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.650238  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:49:41.653246  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.654123  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.654174  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.656370  139843 out.go:177] * Found network options:
	I0731 19:49:41.658139  139843 out.go:177]   - NO_PROXY=192.168.39.146,192.168.39.102
	W0731 19:49:41.659394  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 19:49:41.659424  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 19:49:41.659443  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.659994  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.660184  139843 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:49:41.660288  139843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:49:41.660329  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	W0731 19:49:41.660399  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 19:49:41.660435  139843 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 19:49:41.660503  139843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:49:41.660526  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:49:41.662875  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.663195  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.663354  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.663490  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.663531  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:41.663570  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.663576  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:41.663775  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.663784  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:49:41.663941  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:49:41.663957  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.664140  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:49:41.664145  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:49:41.664289  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:49:41.902689  139843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:49:41.908774  139843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:49:41.908851  139843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:49:41.924353  139843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:49:41.924374  139843 start.go:495] detecting cgroup driver to use...
	I0731 19:49:41.924438  139843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:49:41.941590  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:49:41.956027  139843 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:49:41.956088  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:49:41.970176  139843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:49:41.983233  139843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:49:42.102513  139843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:49:42.256596  139843 docker.go:233] disabling docker service ...
	I0731 19:49:42.256663  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:49:42.271847  139843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:49:42.285469  139843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:49:42.428666  139843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:49:42.556537  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:49:42.571888  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:49:42.590235  139843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:49:42.590313  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.600932  139843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:49:42.601004  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.613682  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.624498  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.634794  139843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:49:42.645520  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.656329  139843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.674523  139843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:49:42.684828  139843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:49:42.695013  139843 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:49:42.695074  139843 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:49:42.709252  139843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:49:42.719340  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:49:42.843340  139843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:49:42.992388  139843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:49:42.992468  139843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
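After restarting CRI-O, the flow above waits up to 60s for the runtime socket to appear before probing crictl. A sketch of such a poll follows; it stats the socket locally for simplicity, whereas the log shows the check being run on the guest over SSH.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present; the runtime is accepting connections soon after
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}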
	I0731 19:49:42.997721  139843 start.go:563] Will wait 60s for crictl version
	I0731 19:49:42.997774  139843 ssh_runner.go:195] Run: which crictl
	I0731 19:49:43.001818  139843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:49:43.046559  139843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:49:43.046674  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:49:43.076903  139843 ssh_runner.go:195] Run: crio --version
	I0731 19:49:43.108474  139843 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:49:43.109991  139843 out.go:177]   - env NO_PROXY=192.168.39.146
	I0731 19:49:43.111425  139843 out.go:177]   - env NO_PROXY=192.168.39.146,192.168.39.102
	I0731 19:49:43.112762  139843 main.go:141] libmachine: (ha-235073-m03) Calling .GetIP
	I0731 19:49:43.115493  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:43.115896  139843 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:49:43.115917  139843 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:49:43.116125  139843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:49:43.120571  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:49:43.133395  139843 mustload.go:65] Loading cluster: ha-235073
	I0731 19:49:43.133659  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:49:43.134004  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:49:43.134055  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:49:43.148767  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0731 19:49:43.149177  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:49:43.149677  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:49:43.149700  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:49:43.150026  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:49:43.150262  139843 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:49:43.151953  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:49:43.152410  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:49:43.152446  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:49:43.167592  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I0731 19:49:43.167997  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:49:43.168492  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:49:43.168514  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:49:43.168834  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:49:43.169047  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:49:43.169211  139843 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073 for IP: 192.168.39.136
	I0731 19:49:43.169232  139843 certs.go:194] generating shared ca certs ...
	I0731 19:49:43.169248  139843 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:49:43.169388  139843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:49:43.169433  139843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:49:43.169442  139843 certs.go:256] generating profile certs ...
	I0731 19:49:43.169508  139843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key
	I0731 19:49:43.169533  139843 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.5f4bd5e8
	I0731 19:49:43.169548  139843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.5f4bd5e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.146 192.168.39.102 192.168.39.136 192.168.39.254]
	I0731 19:49:43.325937  139843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.5f4bd5e8 ...
	I0731 19:49:43.325971  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.5f4bd5e8: {Name:mk7c32c651a738beae3b332c901ba02ca2f38208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:49:43.326171  139843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.5f4bd5e8 ...
	I0731 19:49:43.326187  139843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.5f4bd5e8: {Name:mk4c7eb40d841fadf32775c3ad6100bc7dcc5cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:49:43.326289  139843 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.5f4bd5e8 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt
	I0731 19:49:43.326420  139843 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.5f4bd5e8 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key
	I0731 19:49:43.326542  139843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key
	I0731 19:49:43.326560  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:49:43.326572  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:49:43.326584  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:49:43.326594  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:49:43.326608  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:49:43.326619  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:49:43.326631  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:49:43.326642  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:49:43.326690  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 19:49:43.326718  139843 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 19:49:43.326726  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:49:43.326746  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:49:43.326767  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:49:43.326787  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:49:43.326822  139843 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:49:43.326847  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:49:43.326860  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 19:49:43.326872  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 19:49:43.326904  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:49:43.330104  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:49:43.330628  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:49:43.330653  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:49:43.330929  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:49:43.331122  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:49:43.331321  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:49:43.331454  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:49:43.401675  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 19:49:43.407029  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 19:49:43.418402  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 19:49:43.422817  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0731 19:49:43.434800  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 19:49:43.438848  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 19:49:43.450534  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 19:49:43.454515  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0731 19:49:43.464745  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 19:49:43.468720  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 19:49:43.479197  139843 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 19:49:43.483801  139843 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 19:49:43.494510  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:49:43.522237  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:49:43.547593  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:49:43.570838  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:49:43.593725  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 19:49:43.618028  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:49:43.644478  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:49:43.670965  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:49:43.694595  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:49:43.719254  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 19:49:43.743708  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 19:49:43.767864  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 19:49:43.785116  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0731 19:49:43.802067  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 19:49:43.818557  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0731 19:49:43.834520  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 19:49:43.850999  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 19:49:43.868592  139843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 19:49:43.890832  139843 ssh_runner.go:195] Run: openssl version
	I0731 19:49:43.896712  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:49:43.908340  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:49:43.913343  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:49:43.913397  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:49:43.919326  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:49:43.930633  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 19:49:43.941544  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 19:49:43.946289  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 19:49:43.946340  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 19:49:43.952209  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 19:49:43.964522  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 19:49:43.976763  139843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 19:49:43.981404  139843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 19:49:43.981463  139843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 19:49:43.987499  139843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:49:43.999552  139843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:49:44.003896  139843 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:49:44.003953  139843 kubeadm.go:934] updating node {m03 192.168.39.136 8443 v1.30.3 crio true true} ...
	I0731 19:49:44.004047  139843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-235073-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:49:44.004075  139843 kube-vip.go:115] generating kube-vip config ...
	I0731 19:49:44.004113  139843 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 19:49:44.023252  139843 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 19:49:44.023326  139843 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 19:49:44.023390  139843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:49:44.034652  139843 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 19:49:44.034696  139843 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 19:49:44.045552  139843 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 19:49:44.045578  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 19:49:44.045587  139843 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 19:49:44.045605  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 19:49:44.045655  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 19:49:44.045674  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 19:49:44.045674  139843 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 19:49:44.045732  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:49:44.056803  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 19:49:44.056830  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 19:49:44.063719  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 19:49:44.063727  139843 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 19:49:44.063743  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 19:49:44.063829  139843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 19:49:44.132861  139843 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 19:49:44.132901  139843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 19:49:44.924241  139843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 19:49:44.933907  139843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 19:49:44.951614  139843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:49:44.968733  139843 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 19:49:44.986968  139843 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 19:49:44.991180  139843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:49:45.004475  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:49:45.134748  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:49:45.153563  139843 host.go:66] Checking if "ha-235073" exists ...
	I0731 19:49:45.154025  139843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:49:45.154083  139843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:49:45.170990  139843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0731 19:49:45.171459  139843 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:49:45.171997  139843 main.go:141] libmachine: Using API Version  1
	I0731 19:49:45.172022  139843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:49:45.172400  139843 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:49:45.172613  139843 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:49:45.172798  139843 start.go:317] joinCluster: &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:49:45.172947  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 19:49:45.172982  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:49:45.176138  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:49:45.176649  139843 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:49:45.176678  139843 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:49:45.176836  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:49:45.177064  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:49:45.177229  139843 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:49:45.177368  139843 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:49:45.332057  139843 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:49:45.332111  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b3yv26.dt4fe9zeda3apkfd --discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-235073-m03 --control-plane --apiserver-advertise-address=192.168.39.136 --apiserver-bind-port=8443"
	I0731 19:50:07.954577  139843 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b3yv26.dt4fe9zeda3apkfd --discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-235073-m03 --control-plane --apiserver-advertise-address=192.168.39.136 --apiserver-bind-port=8443": (22.622434486s)
	I0731 19:50:07.954620  139843 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 19:50:08.547853  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-235073-m03 minikube.k8s.io/updated_at=2024_07_31T19_50_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=ha-235073 minikube.k8s.io/primary=false
	I0731 19:50:08.665157  139843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-235073-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 19:50:08.801101  139843 start.go:319] duration metric: took 23.628296732s to joinCluster
	I0731 19:50:08.801196  139843 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:50:08.801549  139843 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:50:08.802829  139843 out.go:177] * Verifying Kubernetes components...
	I0731 19:50:08.804572  139843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:50:09.129690  139843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:50:09.174675  139843 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:50:09.175027  139843 kapi.go:59] client config for ha-235073: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key", CAFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 19:50:09.175126  139843 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.146:8443
	I0731 19:50:09.175432  139843 node_ready.go:35] waiting up to 6m0s for node "ha-235073-m03" to be "Ready" ...
	I0731 19:50:09.175577  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:09.175649  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:09.175665  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:09.175671  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:09.182609  139843 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 19:50:09.676526  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:09.676547  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:09.676556  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:09.676561  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:09.679851  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:10.175738  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:10.175768  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:10.175781  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:10.175787  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:10.179417  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:10.676526  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:10.676553  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:10.676567  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:10.676576  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:10.679880  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:11.175835  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:11.175858  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:11.175867  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:11.175871  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:11.179316  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:11.180521  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:11.675834  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:11.675859  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:11.675872  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:11.675880  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:11.679567  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:12.176140  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:12.176172  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:12.176183  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:12.176187  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:12.179400  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:12.676399  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:12.676419  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:12.676428  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:12.676432  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:12.680072  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:13.175738  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:13.175758  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:13.175767  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:13.175772  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:13.178758  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:13.676409  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:13.676432  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:13.676443  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:13.676449  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:13.679978  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:13.680816  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:14.176272  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:14.176301  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:14.176313  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:14.176320  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:14.179532  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:14.676106  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:14.676134  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:14.676147  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:14.676154  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:14.679525  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:15.176711  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:15.176740  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:15.176750  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:15.176756  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:15.180175  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:15.676603  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:15.676633  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:15.676645  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:15.676653  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:15.680746  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:15.681392  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:16.176393  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:16.176419  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:16.176429  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:16.176434  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:16.179564  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:16.675683  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:16.675708  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:16.675720  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:16.675725  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:16.679525  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:17.176218  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:17.176241  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:17.176252  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:17.176257  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:17.179403  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:17.676258  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:17.676279  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:17.676286  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:17.676291  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:17.680414  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:17.681459  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:18.175687  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:18.175705  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:18.175713  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:18.175716  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:18.179220  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:18.676264  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:18.676289  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:18.676300  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:18.676306  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:18.684784  139843 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 19:50:19.175730  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:19.175754  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:19.175763  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:19.175767  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:19.178808  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:19.675818  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:19.675840  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:19.675849  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:19.675853  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:19.678963  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:20.176029  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:20.176053  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:20.176065  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:20.176070  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:20.179637  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:20.180334  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:20.676676  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:20.676698  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:20.676706  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:20.676712  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:20.679931  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:21.175886  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:21.175908  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:21.175917  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:21.175922  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:21.179155  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:21.676122  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:21.676146  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:21.676154  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:21.676160  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:21.679228  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:22.176509  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:22.176531  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:22.176539  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:22.176542  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:22.180204  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:22.181211  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:22.676458  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:22.676483  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:22.676494  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:22.676502  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:22.679956  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:23.176171  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:23.176193  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:23.176204  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:23.176208  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:23.179717  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:23.676251  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:23.676274  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:23.676283  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:23.676287  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:23.680051  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:24.175976  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:24.175998  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:24.176007  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:24.176010  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:24.179354  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:24.676324  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:24.676353  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:24.676365  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:24.676372  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:24.679868  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:24.680464  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:25.175744  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:25.175767  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:25.175777  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:25.175780  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:25.178789  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:25.675727  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:25.675751  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:25.675760  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:25.675763  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:25.679517  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:26.176630  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:26.176652  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:26.176662  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:26.176667  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:26.179563  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:26.676595  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:26.676618  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:26.676626  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:26.676630  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:26.679987  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:26.680562  139843 node_ready.go:53] node "ha-235073-m03" has status "Ready":"False"
	I0731 19:50:27.176291  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:27.176312  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:27.176321  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:27.176326  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:27.179392  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:27.676328  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:27.676355  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:27.676369  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:27.676373  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:27.680541  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:28.176110  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:28.176136  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.176148  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.176155  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.183105  139843 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 19:50:28.675751  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:28.675776  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.675786  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.675793  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.679910  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:28.680710  139843 node_ready.go:49] node "ha-235073-m03" has status "Ready":"True"
	I0731 19:50:28.680731  139843 node_ready.go:38] duration metric: took 19.505274208s for node "ha-235073-m03" to be "Ready" ...
	I0731 19:50:28.680741  139843 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:50:28.680804  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:28.680814  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.680821  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.680824  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.688302  139843 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 19:50:28.694640  139843 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.694732  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-d2w7q
	I0731 19:50:28.694741  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.694748  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.694754  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.697603  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.698320  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:28.698336  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.698346  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.698351  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.700994  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.701426  139843 pod_ready.go:92] pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:28.701451  139843 pod_ready.go:81] duration metric: took 6.786511ms for pod "coredns-7db6d8ff4d-d2w7q" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.701462  139843 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.701522  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-f7dzt
	I0731 19:50:28.701534  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.701544  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.701554  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.704325  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.704893  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:28.704908  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.704916  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.704921  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.707262  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.707957  139843 pod_ready.go:92] pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:28.707972  139843 pod_ready.go:81] duration metric: took 6.504881ms for pod "coredns-7db6d8ff4d-f7dzt" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.707980  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.708026  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073
	I0731 19:50:28.708033  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.708040  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.708044  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.710485  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.710939  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:28.710950  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.710957  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.710962  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.713442  139843 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 19:50:28.714244  139843 pod_ready.go:92] pod "etcd-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:28.714261  139843 pod_ready.go:81] duration metric: took 6.276497ms for pod "etcd-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.714269  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.714315  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073-m02
	I0731 19:50:28.714322  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.714329  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.714334  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.717791  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:28.719025  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:28.719044  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.719054  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.719059  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.722461  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:28.723383  139843 pod_ready.go:92] pod "etcd-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:28.723402  139843 pod_ready.go:81] duration metric: took 9.124917ms for pod "etcd-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.723411  139843 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:28.876794  139843 request.go:629] Waited for 153.326643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073-m03
	I0731 19:50:28.876871  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/etcd-ha-235073-m03
	I0731 19:50:28.876877  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:28.876888  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:28.876893  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:28.880077  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.076081  139843 request.go:629] Waited for 195.375862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:29.076137  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:29.076153  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.076179  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.076189  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.079347  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.080187  139843 pod_ready.go:92] pod "etcd-ha-235073-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:29.080205  139843 pod_ready.go:81] duration metric: took 356.788799ms for pod "etcd-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.080220  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.276539  139843 request.go:629] Waited for 196.232295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073
	I0731 19:50:29.276635  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073
	I0731 19:50:29.276648  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.276658  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.276663  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.279859  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.475793  139843 request.go:629] Waited for 195.294387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:29.475866  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:29.475874  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.475886  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.475896  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.479417  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.480298  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:29.480322  139843 pod_ready.go:81] duration metric: took 400.093165ms for pod "kube-apiserver-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.480335  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.676393  139843 request.go:629] Waited for 195.981834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m02
	I0731 19:50:29.676467  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m02
	I0731 19:50:29.676472  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.676516  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.676523  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.679960  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.876116  139843 request.go:629] Waited for 195.341402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:29.876199  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:29.876207  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:29.876217  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:29.876227  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:29.879280  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:29.879879  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:29.879899  139843 pod_ready.go:81] duration metric: took 399.557128ms for pod "kube-apiserver-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:29.879908  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.075887  139843 request.go:629] Waited for 195.873775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m03
	I0731 19:50:30.075967  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-235073-m03
	I0731 19:50:30.075976  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.075986  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.075994  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.079799  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:30.276753  139843 request.go:629] Waited for 196.234848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:30.276839  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:30.276847  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.276854  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.276862  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.280427  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:30.281150  139843 pod_ready.go:92] pod "kube-apiserver-ha-235073-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:30.281165  139843 pod_ready.go:81] duration metric: took 401.250556ms for pod "kube-apiserver-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.281174  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.476295  139843 request.go:629] Waited for 195.027881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073
	I0731 19:50:30.476359  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073
	I0731 19:50:30.476364  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.476375  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.476379  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.479677  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:30.675822  139843 request.go:629] Waited for 195.286553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:30.675913  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:30.675925  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.675936  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.675942  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.679354  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:30.680187  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:30.680205  139843 pod_ready.go:81] duration metric: took 399.024732ms for pod "kube-controller-manager-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.680214  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:30.876277  139843 request.go:629] Waited for 195.979424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m02
	I0731 19:50:30.876338  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m02
	I0731 19:50:30.876343  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:30.876351  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:30.876356  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:30.880038  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.076782  139843 request.go:629] Waited for 196.129605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:31.076867  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:31.076877  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.076885  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.076890  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.080255  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.081130  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:31.081152  139843 pod_ready.go:81] duration metric: took 400.931545ms for pod "kube-controller-manager-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.081163  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.276307  139843 request.go:629] Waited for 195.065568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m03
	I0731 19:50:31.276381  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-235073-m03
	I0731 19:50:31.276388  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.276396  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.276402  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.280048  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.476354  139843 request.go:629] Waited for 195.34089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:31.476420  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:31.476424  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.476432  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.476436  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.480461  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:31.481095  139843 pod_ready.go:92] pod "kube-controller-manager-ha-235073-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:31.481121  139843 pod_ready.go:81] duration metric: took 399.950752ms for pod "kube-controller-manager-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.481136  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4g5ws" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.676296  139843 request.go:629] Waited for 195.076088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4g5ws
	I0731 19:50:31.676360  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4g5ws
	I0731 19:50:31.676365  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.676377  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.676383  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.680297  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.876450  139843 request.go:629] Waited for 195.374626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:31.876542  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:31.876553  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:31.876564  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:31.876571  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:31.880441  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:31.880893  139843 pod_ready.go:92] pod "kube-proxy-4g5ws" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:31.880912  139843 pod_ready.go:81] duration metric: took 399.768167ms for pod "kube-proxy-4g5ws" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:31.880925  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkrmt" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.076237  139843 request.go:629] Waited for 195.239726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkrmt
	I0731 19:50:32.076336  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkrmt
	I0731 19:50:32.076345  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.076356  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.076366  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.080695  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:32.276772  139843 request.go:629] Waited for 195.403494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:32.276829  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:32.276834  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.276842  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.276845  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.281369  139843 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 19:50:32.282475  139843 pod_ready.go:92] pod "kube-proxy-mkrmt" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:32.282493  139843 pod_ready.go:81] duration metric: took 401.561302ms for pod "kube-proxy-mkrmt" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.282502  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-td8j2" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.476553  139843 request.go:629] Waited for 193.98316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-td8j2
	I0731 19:50:32.476637  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-proxy-td8j2
	I0731 19:50:32.476642  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.476650  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.476654  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.482107  139843 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 19:50:32.676368  139843 request.go:629] Waited for 193.352065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:32.676476  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:32.676488  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.676498  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.676506  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.679741  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:32.680200  139843 pod_ready.go:92] pod "kube-proxy-td8j2" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:32.680219  139843 pod_ready.go:81] duration metric: took 397.710991ms for pod "kube-proxy-td8j2" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.680228  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:32.876058  139843 request.go:629] Waited for 195.737513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073
	I0731 19:50:32.876124  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073
	I0731 19:50:32.876132  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:32.876144  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:32.876152  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:32.879409  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.076543  139843 request.go:629] Waited for 196.353473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:33.076601  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073
	I0731 19:50:33.076607  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.076614  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.076625  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.080311  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.080994  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:33.081018  139843 pod_ready.go:81] duration metric: took 400.780591ms for pod "kube-scheduler-ha-235073" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.081031  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.275821  139843 request.go:629] Waited for 194.695647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m02
	I0731 19:50:33.275891  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m02
	I0731 19:50:33.275896  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.275903  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.275908  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.279206  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.476256  139843 request.go:629] Waited for 196.416798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:33.476332  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m02
	I0731 19:50:33.476339  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.476353  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.476360  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.479584  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.480216  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:33.480274  139843 pod_ready.go:81] duration metric: took 399.20017ms for pod "kube-scheduler-ha-235073-m02" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.480294  139843 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.676336  139843 request.go:629] Waited for 195.96373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m03
	I0731 19:50:33.676439  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-235073-m03
	I0731 19:50:33.676456  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.676469  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.676481  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.680201  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.876453  139843 request.go:629] Waited for 195.361852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:33.876532  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes/ha-235073-m03
	I0731 19:50:33.876537  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.876545  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.876552  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.880221  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:33.880823  139843 pod_ready.go:92] pod "kube-scheduler-ha-235073-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 19:50:33.880842  139843 pod_ready.go:81] duration metric: took 400.540427ms for pod "kube-scheduler-ha-235073-m03" in "kube-system" namespace to be "Ready" ...
	I0731 19:50:33.880854  139843 pod_ready.go:38] duration metric: took 5.200102871s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:50:33.880869  139843 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:50:33.880918  139843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:50:33.895980  139843 api_server.go:72] duration metric: took 25.094738931s to wait for apiserver process to appear ...
	I0731 19:50:33.896009  139843 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:50:33.896033  139843 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0731 19:50:33.901322  139843 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0731 19:50:33.901410  139843 round_trippers.go:463] GET https://192.168.39.146:8443/version
	I0731 19:50:33.901419  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:33.901436  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:33.901442  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:33.902346  139843 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 19:50:33.902424  139843 api_server.go:141] control plane version: v1.30.3
	I0731 19:50:33.902439  139843 api_server.go:131] duration metric: took 6.423299ms to wait for apiserver health ...
	I0731 19:50:33.902448  139843 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:50:34.075781  139843 request.go:629] Waited for 173.262693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:34.075861  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:34.075867  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:34.075877  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:34.075883  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:34.083018  139843 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 19:50:34.089685  139843 system_pods.go:59] 24 kube-system pods found
	I0731 19:50:34.089714  139843 system_pods.go:61] "coredns-7db6d8ff4d-d2w7q" [c47597b4-a38b-438c-9c3b-8f7f45130f75] Running
	I0731 19:50:34.089719  139843 system_pods.go:61] "coredns-7db6d8ff4d-f7dzt" [9549b5d7-bb23-4934-883b-dd07f8d864d8] Running
	I0731 19:50:34.089722  139843 system_pods.go:61] "etcd-ha-235073" [ef927139-ead6-413d-b0cd-beb931fc4700] Running
	I0731 19:50:34.089725  139843 system_pods.go:61] "etcd-ha-235073-m02" [2bc3b6c8-c8de-42c0-a752-302d07433ebc] Running
	I0731 19:50:34.089728  139843 system_pods.go:61] "etcd-ha-235073-m03" [b78ae13d-78b3-4250-8b6b-dc3a2bd24b53] Running
	I0731 19:50:34.089731  139843 system_pods.go:61] "kindnet-6mpsn" [1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef] Running
	I0731 19:50:34.089734  139843 system_pods.go:61] "kindnet-964d5" [c663aa92-d78d-4d55-a7e8-29bd0d67e7b6] Running
	I0731 19:50:34.089737  139843 system_pods.go:61] "kindnet-v5g92" [c8020666-5376-4bdf-a9a3-d10b67fc04a9] Running
	I0731 19:50:34.089740  139843 system_pods.go:61] "kube-apiserver-ha-235073" [c7da5168-cd07-4660-91a7-f25bf44db28e] Running
	I0731 19:50:34.089745  139843 system_pods.go:61] "kube-apiserver-ha-235073-m02" [bb498dc0-7bea-4f44-b6ea-0b66122d8205] Running
	I0731 19:50:34.089750  139843 system_pods.go:61] "kube-apiserver-ha-235073-m03" [6880f463-4838-414e-8387-7ee8c8b9f84b] Running
	I0731 19:50:34.089753  139843 system_pods.go:61] "kube-controller-manager-ha-235073" [1d7ad140-888f-4863-aa09-0651eae569a7] Running
	I0731 19:50:34.089759  139843 system_pods.go:61] "kube-controller-manager-ha-235073-m02" [7d1e23f4-1609-476f-b30e-1e18d291ca4c] Running
	I0731 19:50:34.089762  139843 system_pods.go:61] "kube-controller-manager-ha-235073-m03" [a6078f70-cd3b-48f2-a9a3-982f9d4bd67d] Running
	I0731 19:50:34.089765  139843 system_pods.go:61] "kube-proxy-4g5ws" [681015ee-d7ba-460f-a593-0152df2b065d] Running
	I0731 19:50:34.089768  139843 system_pods.go:61] "kube-proxy-mkrmt" [5f001ea6-7c3b-4edc-8f66-b107a3c0d570] Running
	I0731 19:50:34.089771  139843 system_pods.go:61] "kube-proxy-td8j2" [b836edfa-4df1-40e4-a58a-3f23afd5b78b] Running
	I0731 19:50:34.089774  139843 system_pods.go:61] "kube-scheduler-ha-235073" [597d51e9-b674-4b7f-b104-6e8808a5d593] Running
	I0731 19:50:34.089777  139843 system_pods.go:61] "kube-scheduler-ha-235073-m02" [84f686e7-4317-41b4-8064-621a7fa7ade8] Running
	I0731 19:50:34.089780  139843 system_pods.go:61] "kube-scheduler-ha-235073-m03" [ce77b19b-2862-41e5-9006-8d6667b563b8] Running
	I0731 19:50:34.089782  139843 system_pods.go:61] "kube-vip-ha-235073" [f28e113e-7c11-4a00-a8cb-fb5527042343] Running
	I0731 19:50:34.089785  139843 system_pods.go:61] "kube-vip-ha-235073-m02" [4f387765-627c-49e4-9fce-eae672099a6d] Running
	I0731 19:50:34.089788  139843 system_pods.go:61] "kube-vip-ha-235073-m03" [abd1a06b-679a-4dc7-87bf-6aa534e6f031] Running
	I0731 19:50:34.089791  139843 system_pods.go:61] "storage-provisioner" [9cd9bb70-badc-4b4b-a135-62644edac7dd] Running
	I0731 19:50:34.089797  139843 system_pods.go:74] duration metric: took 187.341454ms to wait for pod list to return data ...
	I0731 19:50:34.089806  139843 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:50:34.276220  139843 request.go:629] Waited for 186.333016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/default/serviceaccounts
	I0731 19:50:34.276288  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/default/serviceaccounts
	I0731 19:50:34.276296  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:34.276305  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:34.276311  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:34.279550  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:34.279710  139843 default_sa.go:45] found service account: "default"
	I0731 19:50:34.279732  139843 default_sa.go:55] duration metric: took 189.917872ms for default service account to be created ...
	I0731 19:50:34.279742  139843 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:50:34.476177  139843 request.go:629] Waited for 196.355043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:34.476267  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/namespaces/kube-system/pods
	I0731 19:50:34.476274  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:34.476286  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:34.476295  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:34.483165  139843 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 19:50:34.491054  139843 system_pods.go:86] 24 kube-system pods found
	I0731 19:50:34.491083  139843 system_pods.go:89] "coredns-7db6d8ff4d-d2w7q" [c47597b4-a38b-438c-9c3b-8f7f45130f75] Running
	I0731 19:50:34.491088  139843 system_pods.go:89] "coredns-7db6d8ff4d-f7dzt" [9549b5d7-bb23-4934-883b-dd07f8d864d8] Running
	I0731 19:50:34.491093  139843 system_pods.go:89] "etcd-ha-235073" [ef927139-ead6-413d-b0cd-beb931fc4700] Running
	I0731 19:50:34.491097  139843 system_pods.go:89] "etcd-ha-235073-m02" [2bc3b6c8-c8de-42c0-a752-302d07433ebc] Running
	I0731 19:50:34.491101  139843 system_pods.go:89] "etcd-ha-235073-m03" [b78ae13d-78b3-4250-8b6b-dc3a2bd24b53] Running
	I0731 19:50:34.491104  139843 system_pods.go:89] "kindnet-6mpsn" [1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef] Running
	I0731 19:50:34.491108  139843 system_pods.go:89] "kindnet-964d5" [c663aa92-d78d-4d55-a7e8-29bd0d67e7b6] Running
	I0731 19:50:34.491112  139843 system_pods.go:89] "kindnet-v5g92" [c8020666-5376-4bdf-a9a3-d10b67fc04a9] Running
	I0731 19:50:34.491115  139843 system_pods.go:89] "kube-apiserver-ha-235073" [c7da5168-cd07-4660-91a7-f25bf44db28e] Running
	I0731 19:50:34.491119  139843 system_pods.go:89] "kube-apiserver-ha-235073-m02" [bb498dc0-7bea-4f44-b6ea-0b66122d8205] Running
	I0731 19:50:34.491123  139843 system_pods.go:89] "kube-apiserver-ha-235073-m03" [6880f463-4838-414e-8387-7ee8c8b9f84b] Running
	I0731 19:50:34.491127  139843 system_pods.go:89] "kube-controller-manager-ha-235073" [1d7ad140-888f-4863-aa09-0651eae569a7] Running
	I0731 19:50:34.491131  139843 system_pods.go:89] "kube-controller-manager-ha-235073-m02" [7d1e23f4-1609-476f-b30e-1e18d291ca4c] Running
	I0731 19:50:34.491137  139843 system_pods.go:89] "kube-controller-manager-ha-235073-m03" [a6078f70-cd3b-48f2-a9a3-982f9d4bd67d] Running
	I0731 19:50:34.491141  139843 system_pods.go:89] "kube-proxy-4g5ws" [681015ee-d7ba-460f-a593-0152df2b065d] Running
	I0731 19:50:34.491145  139843 system_pods.go:89] "kube-proxy-mkrmt" [5f001ea6-7c3b-4edc-8f66-b107a3c0d570] Running
	I0731 19:50:34.491148  139843 system_pods.go:89] "kube-proxy-td8j2" [b836edfa-4df1-40e4-a58a-3f23afd5b78b] Running
	I0731 19:50:34.491152  139843 system_pods.go:89] "kube-scheduler-ha-235073" [597d51e9-b674-4b7f-b104-6e8808a5d593] Running
	I0731 19:50:34.491156  139843 system_pods.go:89] "kube-scheduler-ha-235073-m02" [84f686e7-4317-41b4-8064-621a7fa7ade8] Running
	I0731 19:50:34.491162  139843 system_pods.go:89] "kube-scheduler-ha-235073-m03" [ce77b19b-2862-41e5-9006-8d6667b563b8] Running
	I0731 19:50:34.491166  139843 system_pods.go:89] "kube-vip-ha-235073" [f28e113e-7c11-4a00-a8cb-fb5527042343] Running
	I0731 19:50:34.491170  139843 system_pods.go:89] "kube-vip-ha-235073-m02" [4f387765-627c-49e4-9fce-eae672099a6d] Running
	I0731 19:50:34.491176  139843 system_pods.go:89] "kube-vip-ha-235073-m03" [abd1a06b-679a-4dc7-87bf-6aa534e6f031] Running
	I0731 19:50:34.491180  139843 system_pods.go:89] "storage-provisioner" [9cd9bb70-badc-4b4b-a135-62644edac7dd] Running
	I0731 19:50:34.491187  139843 system_pods.go:126] duration metric: took 211.436551ms to wait for k8s-apps to be running ...
	I0731 19:50:34.491198  139843 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:50:34.491244  139843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:50:34.507738  139843 system_svc.go:56] duration metric: took 16.527395ms WaitForService to wait for kubelet
	I0731 19:50:34.507770  139843 kubeadm.go:582] duration metric: took 25.706532708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:50:34.507788  139843 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:50:34.676176  139843 request.go:629] Waited for 168.292834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.146:8443/api/v1/nodes
	I0731 19:50:34.676232  139843 round_trippers.go:463] GET https://192.168.39.146:8443/api/v1/nodes
	I0731 19:50:34.676237  139843 round_trippers.go:469] Request Headers:
	I0731 19:50:34.676244  139843 round_trippers.go:473]     Accept: application/json, */*
	I0731 19:50:34.676248  139843 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 19:50:34.679777  139843 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 19:50:34.680927  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:50:34.680947  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:50:34.680959  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:50:34.680966  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:50:34.680972  139843 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:50:34.680979  139843 node_conditions.go:123] node cpu capacity is 2
	I0731 19:50:34.680987  139843 node_conditions.go:105] duration metric: took 173.19318ms to run NodePressure ...
	I0731 19:50:34.681007  139843 start.go:241] waiting for startup goroutines ...
	I0731 19:50:34.681030  139843 start.go:255] writing updated cluster config ...
	I0731 19:50:34.681371  139843 ssh_runner.go:195] Run: rm -f paused
	I0731 19:50:34.732057  139843 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 19:50:34.734140  139843 out.go:177] * Done! kubectl is now configured to use "ha-235073" cluster and "default" namespace by default
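	The readiness phase recorded above (the pod_ready.go waits on each control-plane pod, the apiserver /healthz probe, and the final "Done!" line) follows a standard client-go polling pattern: fetch the pod, inspect its Ready condition, retry until a timeout. The sketch below is only an illustration of that pattern under assumed defaults (local kubeconfig, the etcd-ha-235073 pod and kube-system namespace taken from the log); it is not minikube's actual implementation.

	```go
	// Illustrative sketch (assumption: not minikube's code) of the "waiting up to 6m0s
	// for pod ... to be Ready" loop visible in the log above, using client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the local kubeconfig the way kubectl does (path is an assumption).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 2s for up to 6 minutes, mirroring the 6m0s budget in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-235073", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				return podIsReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	```

	The client-side throttling messages interleaved above ("Waited for ... due to client-side throttling") come from client-go's default rate limiter and are expected during bursts of GETs like this; they are not an error in the test run.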
	
	
	==> CRI-O <==
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.294735823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455714294713231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=494a2966-8d18-408b-8de7-78669fa9729d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.295296957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48ebe115-1c63-4aeb-bb10-d09e4c47d798 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.295349196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48ebe115-1c63-4aeb-bb10-d09e4c47d798 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.295601914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455438711049436,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228102873852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba82db411e0f901dff59f98c9e5ae0d5213285233844742c5879ce5b6232f35,PodSandboxId:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722455228083526798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228031037861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb
23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722455215945081182,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245521
1859729190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929,PodSandboxId:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224551953
39861966,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600867c5ba2b1d2ffd206e40,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455191497802122,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455191481530968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77,PodSandboxId:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455191397982754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e,PodSandboxId:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455191372229593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48ebe115-1c63-4aeb-bb10-d09e4c47d798 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.334235633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=983c61a3-9850-47e0-bb6f-2b9c6d9bd05b name=/runtime.v1.RuntimeService/Version
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.334311421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=983c61a3-9850-47e0-bb6f-2b9c6d9bd05b name=/runtime.v1.RuntimeService/Version
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.335350136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9728971e-32be-4603-ae57-1438c40e136c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.335855864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455714335832314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9728971e-32be-4603-ae57-1438c40e136c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.336485986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9b6f5c4-c474-4721-85b5-1daa70341e0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.336539832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9b6f5c4-c474-4721-85b5-1daa70341e0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.336787930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455438711049436,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228102873852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba82db411e0f901dff59f98c9e5ae0d5213285233844742c5879ce5b6232f35,PodSandboxId:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722455228083526798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228031037861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb
23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722455215945081182,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245521
1859729190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929,PodSandboxId:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224551953
39861966,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600867c5ba2b1d2ffd206e40,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455191497802122,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455191481530968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77,PodSandboxId:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455191397982754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e,PodSandboxId:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455191372229593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9b6f5c4-c474-4721-85b5-1daa70341e0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.375787776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d978667-8fd8-44e5-8f8d-deb736379571 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.375875976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d978667-8fd8-44e5-8f8d-deb736379571 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.377310756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60c5dbfb-a737-4254-ba1c-986a072fa5fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.377847642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455714377825882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60c5dbfb-a737-4254-ba1c-986a072fa5fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.378866203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86f6a448-12fd-4c11-86cc-c99508a36132 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.378938379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86f6a448-12fd-4c11-86cc-c99508a36132 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.379285659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455438711049436,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228102873852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba82db411e0f901dff59f98c9e5ae0d5213285233844742c5879ce5b6232f35,PodSandboxId:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722455228083526798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228031037861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb
23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722455215945081182,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245521
1859729190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929,PodSandboxId:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224551953
39861966,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600867c5ba2b1d2ffd206e40,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455191497802122,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455191481530968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77,PodSandboxId:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455191397982754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e,PodSandboxId:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455191372229593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86f6a448-12fd-4c11-86cc-c99508a36132 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.417933652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51069303-a0c1-405f-88ed-585f57cfd33b name=/runtime.v1.RuntimeService/Version
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.418005990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51069303-a0c1-405f-88ed-585f57cfd33b name=/runtime.v1.RuntimeService/Version
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.418967215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf613e3e-6997-4105-b905-43792ef4e1ae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.419541149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455714419513645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf613e3e-6997-4105-b905-43792ef4e1ae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.419988527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9c978e5-4a13-462b-b222-bf46697b842e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.420040298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9c978e5-4a13-462b-b222-bf46697b842e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:55:14 ha-235073 crio[680]: time="2024-07-31 19:55:14.420373535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455438711049436,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228102873852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eba82db411e0f901dff59f98c9e5ae0d5213285233844742c5879ce5b6232f35,PodSandboxId:714a1d887a6e7a6aa0abbfaae3c16b878224596f43f32beb43f080809e9ffd58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722455228083526798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455228031037861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb
23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722455215945081182,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245521
1859729190,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929,PodSandboxId:1174f1364f26d10dc051aa73fa255a606ad9bf503fcd115b3a9cbc5ca9742116,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224551953
39861966,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b9131be600867c5ba2b1d2ffd206e40,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455191497802122,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455191481530968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77,PodSandboxId:d2fb34888cbe775dce80bba1d1d7d8b4559159e4e1a7e8694d7d5e67f5d58e2f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455191397982754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e,PodSandboxId:13ce57fab67b3276bebda32167ce6dffb6760a77b9289da77056562f62051eda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455191372229593,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9c978e5-4a13-462b-b222-bf46697b842e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	36d67125ccdba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   6c4d1efc4989e       busybox-fc5497c4f-g9vds
	a9ddbd3f3cc5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   55ec4971c2e64       coredns-7db6d8ff4d-d2w7q
	eba82db411e0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   714a1d887a6e7       storage-provisioner
	30540ee956135       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   231aebfc0631b       coredns-7db6d8ff4d-f7dzt
	ee50c4b9e2394       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago       Running             kindnet-cni               0                   feeccc2a1a3e7       kindnet-6mpsn
	8811952c62538       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago       Running             kube-proxy                0                   dbf6b114c5cb5       kube-proxy-td8j2
	c31d2ba10cadb       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   1174f1364f26d       kube-vip-ha-235073
	9d642debf242f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   58bfb1289eb04       etcd-ha-235073
	216984c6b7d59       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago       Running             kube-scheduler            0                   c9f1bb2690bab       kube-scheduler-ha-235073
	cf0877f308475       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago       Running             kube-apiserver            0                   d2fb34888cbe7       kube-apiserver-ha-235073
	c6ae1a1aafd35       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago       Running             kube-controller-manager   0                   13ce57fab67b3       kube-controller-manager-ha-235073
	
	
	==> coredns [30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90] <==
	[INFO] 10.244.2.2:36658 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216258s
	[INFO] 10.244.2.2:43101 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00049202s
	[INFO] 10.244.1.2:41993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131309s
	[INFO] 10.244.1.2:58295 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204788s
	[INFO] 10.244.1.2:43074 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000178134s
	[INFO] 10.244.1.2:46950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165895s
	[INFO] 10.244.1.2:60484 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143802s
	[INFO] 10.244.0.4:58480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129276s
	[INFO] 10.244.2.2:36458 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001308986s
	[INFO] 10.244.2.2:48644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094253s
	[INFO] 10.244.1.2:34972 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151042s
	[INFO] 10.244.1.2:32819 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096017s
	[INFO] 10.244.1.2:48157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075225s
	[INFO] 10.244.0.4:54613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084738s
	[INFO] 10.244.0.4:60576 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000829s
	[INFO] 10.244.2.2:36544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164516s
	[INFO] 10.244.2.2:45708 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142016s
	[INFO] 10.244.2.2:40736 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110815s
	[INFO] 10.244.2.2:36751 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104862s
	[INFO] 10.244.1.2:54006 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000448605s
	[INFO] 10.244.1.2:59479 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121156s
	[INFO] 10.244.0.4:33169 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051358s
	[INFO] 10.244.2.2:44195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135177s
	[INFO] 10.244.2.2:36586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153451s
	[INFO] 10.244.2.2:56302 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124509s
	
	
	==> coredns [a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22] <==
	[INFO] 10.244.1.2:40987 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.006648733s
	[INFO] 10.244.1.2:56046 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000294533s
	[INFO] 10.244.1.2:34815 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013751047s
	[INFO] 10.244.0.4:38669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014517s
	[INFO] 10.244.0.4:47964 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002016491s
	[INFO] 10.244.0.4:48652 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071321s
	[INFO] 10.244.0.4:47729 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081815s
	[INFO] 10.244.0.4:55084 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001248993s
	[INFO] 10.244.0.4:57805 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076977s
	[INFO] 10.244.0.4:57456 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085752s
	[INFO] 10.244.2.2:38902 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010955s
	[INFO] 10.244.2.2:36166 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001717036s
	[INFO] 10.244.2.2:32959 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086137s
	[INFO] 10.244.2.2:56090 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064343s
	[INFO] 10.244.2.2:53218 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067616s
	[INFO] 10.244.2.2:56028 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000210727s
	[INFO] 10.244.1.2:41979 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111285s
	[INFO] 10.244.0.4:50255 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014493s
	[INFO] 10.244.0.4:37511 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157288s
	[INFO] 10.244.1.2:42868 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222662s
	[INFO] 10.244.1.2:42728 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124693s
	[INFO] 10.244.0.4:54532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008837s
	[INFO] 10.244.0.4:52959 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063732s
	[INFO] 10.244.0.4:56087 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045645s
	[INFO] 10.244.2.2:42350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130124s
	
	
	==> describe nodes <==
	Name:               ha-235073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_46_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:46:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:55:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:51:13 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:51:13 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:51:13 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:51:13 +0000   Wed, 31 Jul 2024 19:47:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    ha-235073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e35869b5bfb347c6a5e12e63b257d2a1
	  System UUID:                e35869b5-bfb3-47c6-a5e1-2e63b257d2a1
	  Boot ID:                    846162a9-11ef-48d0-b284-9320ff7be7d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-g9vds              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-7db6d8ff4d-d2w7q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 coredns-7db6d8ff4d-f7dzt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m23s
	  kube-system                 etcd-ha-235073                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m37s
	  kube-system                 kindnet-6mpsn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-ha-235073             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-ha-235073    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-td8j2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-ha-235073             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-vip-ha-235073                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m22s  kube-proxy       
	  Normal  Starting                 8m37s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m37s  kubelet          Node ha-235073 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s  kubelet          Node ha-235073 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s  kubelet          Node ha-235073 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m24s  node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal  NodeReady                8m7s   kubelet          Node ha-235073 status is now: NodeReady
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal  RegisteredNode           4m52s  node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	
	
	Name:               ha-235073-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_48_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:48:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:51:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 19:50:49 +0000   Wed, 31 Jul 2024 19:52:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 19:50:49 +0000   Wed, 31 Jul 2024 19:52:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 19:50:49 +0000   Wed, 31 Jul 2024 19:52:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 19:50:49 +0000   Wed, 31 Jul 2024 19:52:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-235073-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55b090e5d4e04e9e843bceddcf4718db
	  System UUID:                55b090e5-d4e0-4e9e-843b-ceddcf4718db
	  Boot ID:                    60d7bb83-3d4a-4e10-bd0e-552a47937425
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d7lpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-235073-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-v5g92                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-235073-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-235073-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-4g5ws                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-235073-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-235073-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m28s (x8 over 6m28s)  kubelet          Node ha-235073-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s (x8 over 6m28s)  kubelet          Node ha-235073-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s (x7 over 6m28s)  kubelet          Node ha-235073-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  NodeNotReady             2m44s                  node-controller  Node ha-235073-m02 status is now: NodeNotReady
	
	
	Name:               ha-235073-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_50_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:50:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:55:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:51:06 +0000   Wed, 31 Jul 2024 19:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:51:06 +0000   Wed, 31 Jul 2024 19:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:51:06 +0000   Wed, 31 Jul 2024 19:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:51:06 +0000   Wed, 31 Jul 2024 19:50:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    ha-235073-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9779a35d74b41fd9b9796249c8a5396
	  System UUID:                e9779a35-d74b-41fd-9b97-96249c8a5396
	  Boot ID:                    e1dbb3c6-f968-4c2c-9a34-7c1181741d49
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wqc9h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-235073-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m7s
	  kube-system                 kindnet-964d5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m9s
	  kube-system                 kube-apiserver-ha-235073-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-ha-235073-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-mkrmt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-scheduler-ha-235073-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-vip-ha-235073-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node ha-235073-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node ha-235073-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s (x7 over 5m9s)  kubelet          Node ha-235073-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal  RegisteredNode           4m52s                node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	
	
	Name:               ha-235073-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_51_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:51:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:55:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:51:42 +0000   Wed, 31 Jul 2024 19:51:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:51:42 +0000   Wed, 31 Jul 2024 19:51:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:51:42 +0000   Wed, 31 Jul 2024 19:51:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:51:42 +0000   Wed, 31 Jul 2024 19:51:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-235073-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0f8c10839cf446c8b0628fe1b69511a
	  System UUID:                f0f8c108-39cf-446c-8b06-28fe1b69511a
	  Boot ID:                    543e1880-ee64-4732-a58d-5bb5b1549018
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2gzbj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-proxy-jb89g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x2 over 4m4s)  kubelet          Node ha-235073-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x2 over 4m4s)  kubelet          Node ha-235073-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x2 over 4m4s)  kubelet          Node ha-235073-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-235073-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul31 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051288] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039898] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.757176] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.430789] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.593096] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.152359] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.063310] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060385] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.158302] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127644] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264376] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129943] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +5.303318] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.056828] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.179861] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.138103] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +5.414223] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.822229] kauditd_printk_skb: 34 callbacks suppressed
	[Jul31 19:48] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae] <==
	{"level":"warn","ts":"2024-07-31T19:55:14.672013Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.696019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.705985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.71107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.724448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.733982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.740793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.744241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.750936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.758909Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.76474Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.770978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.772249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.774593Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.777645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.78456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.790226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.796035Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.799057Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.801987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.807835Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.814379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.821898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.872185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T19:55:14.873747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fc85001aa37e7974","from":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:55:14 up 9 min,  0 users,  load average: 0.33, 0.44, 0.24
	Linux ha-235073 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a] <==
	I0731 19:54:37.005971       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 19:54:47.004296       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:54:47.004346       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 19:54:47.004540       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:54:47.004566       1 main.go:299] handling current node
	I0731 19:54:47.004588       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:54:47.004605       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:54:47.004679       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:54:47.004702       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:54:56.995813       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:54:56.995939       1 main.go:299] handling current node
	I0731 19:54:56.995977       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:54:56.995997       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:54:56.996208       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:54:56.996239       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:54:56.996319       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:54:56.996338       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 19:55:06.996579       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:55:06.996690       1 main.go:299] handling current node
	I0731 19:55:06.996707       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:55:06.996713       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:55:06.997284       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:55:06.997313       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:55:06.997625       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:55:06.997698       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77] <==
	I0731 19:46:37.714095       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 19:46:37.873030       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 19:46:51.204370       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 19:46:51.305256       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	http2: server: error reading preface from client 192.168.39.102:59490: read tcp 192.168.39.254:8443->192.168.39.102:59490: read: connection reset by peer
	E0731 19:48:47.430717       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0731 19:48:47.430864       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0731 19:48:47.431527       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 606.959µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0731 19:48:47.432550       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0731 19:48:47.433972       1 timeout.go:142] post-timeout activity - time-elapsed: 3.403318ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0731 19:50:40.305962       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46490: use of closed network connection
	E0731 19:50:40.493191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46494: use of closed network connection
	E0731 19:50:40.684978       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46520: use of closed network connection
	E0731 19:50:40.882497       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46548: use of closed network connection
	E0731 19:50:41.081570       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46572: use of closed network connection
	E0731 19:50:41.267294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46596: use of closed network connection
	E0731 19:50:41.455958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46624: use of closed network connection
	E0731 19:50:41.625324       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46638: use of closed network connection
	E0731 19:50:41.831477       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46654: use of closed network connection
	E0731 19:50:42.138349       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46682: use of closed network connection
	E0731 19:50:42.310504       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46700: use of closed network connection
	E0731 19:50:42.500321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46714: use of closed network connection
	E0731 19:50:42.724185       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46732: use of closed network connection
	E0731 19:50:42.920677       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46738: use of closed network connection
	E0731 19:50:43.096888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:46750: use of closed network connection
	
	
	==> kube-controller-manager [c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e] <==
	I0731 19:50:35.685216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.719092ms"
	I0731 19:50:35.833018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="147.712557ms"
	I0731 19:50:36.180784       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="347.482919ms"
	E0731 19:50:36.180839       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 19:50:36.180922       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.51µs"
	I0731 19:50:36.186765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.646µs"
	I0731 19:50:36.523477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.795µs"
	I0731 19:50:36.809473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.869µs"
	I0731 19:50:36.821770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.655µs"
	I0731 19:50:36.830199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.082µs"
	I0731 19:50:39.010400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.03828ms"
	I0731 19:50:39.011494       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.234µs"
	I0731 19:50:39.619866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.111078ms"
	I0731 19:50:39.621157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.965µs"
	I0731 19:50:39.873592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.023382ms"
	I0731 19:50:39.873727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.946µs"
	E0731 19:51:10.864582       1 certificate_controller.go:146] Sync csr-w4j2z failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-w4j2z": the object has been modified; please apply your changes to the latest version and try again
	E0731 19:51:10.867469       1 certificate_controller.go:146] Sync csr-w4j2z failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-w4j2z": the object has been modified; please apply your changes to the latest version and try again
	I0731 19:51:11.132927       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-235073-m04\" does not exist"
	I0731 19:51:11.164551       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-235073-m04" podCIDRs=["10.244.3.0/24"]
	I0731 19:51:15.423430       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-235073-m04"
	I0731 19:51:31.835746       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-235073-m04"
	I0731 19:52:30.454615       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-235073-m04"
	I0731 19:52:30.586538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.963057ms"
	I0731 19:52:30.586646       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.015µs"
	
	
	==> kube-proxy [8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac] <==
	I0731 19:46:52.073670       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:46:52.091608       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.146"]
	I0731 19:46:52.151680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:46:52.151738       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:46:52.151756       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:46:52.154737       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:46:52.155285       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:46:52.155345       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:46:52.157051       1 config.go:192] "Starting service config controller"
	I0731 19:46:52.157340       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:46:52.157391       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:46:52.157396       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:46:52.158566       1 config.go:319] "Starting node config controller"
	I0731 19:46:52.158594       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:46:52.258407       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:46:52.258494       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:46:52.258668       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498] <==
	W0731 19:46:35.520688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 19:46:35.520783       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 19:46:35.524897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:46:35.524922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 19:46:35.652443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 19:46:35.652488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 19:46:35.678400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:46:35.678489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:46:35.733213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:46:35.733261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:46:35.752795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:46:35.752877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:46:35.800454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 19:46:35.800545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 19:46:35.847461       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:46:35.847546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0731 19:46:36.184727       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 19:51:11.217044       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2gzbj\": pod kindnet-2gzbj is already assigned to node \"ha-235073-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2gzbj" node="ha-235073-m04"
	E0731 19:51:11.217254       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fd812d3e-fad7-43de-bab9-896c55ee3194(kube-system/kindnet-2gzbj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2gzbj"
	E0731 19:51:11.217292       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2gzbj\": pod kindnet-2gzbj is already assigned to node \"ha-235073-m04\"" pod="kube-system/kindnet-2gzbj"
	I0731 19:51:11.217317       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2gzbj" node="ha-235073-m04"
	E0731 19:51:11.217734       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jb89g\": pod kube-proxy-jb89g is already assigned to node \"ha-235073-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jb89g" node="ha-235073-m04"
	E0731 19:51:11.217852       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2bc1d841-cf7f-44ff-825f-bad1f2fd0ead(kube-system/kube-proxy-jb89g) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-jb89g"
	E0731 19:51:11.218006       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jb89g\": pod kube-proxy-jb89g is already assigned to node \"ha-235073-m04\"" pod="kube-system/kube-proxy-jb89g"
	I0731 19:51:11.218144       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jb89g" node="ha-235073-m04"
	
	
	==> kubelet <==
	Jul 31 19:50:37 ha-235073 kubelet[1388]: E0731 19:50:37.839833    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:50:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:50:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:50:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:50:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:51:37 ha-235073 kubelet[1388]: E0731 19:51:37.843335    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:51:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:51:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:51:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:51:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:52:37 ha-235073 kubelet[1388]: E0731 19:52:37.845506    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:52:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:52:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:52:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:52:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:53:37 ha-235073 kubelet[1388]: E0731 19:53:37.842283    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:53:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:53:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:53:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:53:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:54:37 ha-235073 kubelet[1388]: E0731 19:54:37.844358    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:54:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:54:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:54:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:54:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-235073 -n ha-235073
helpers_test.go:261: (dbg) Run:  kubectl --context ha-235073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (397.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-235073 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-235073 -v=7 --alsologtostderr
E0731 19:55:37.509045  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-235073 -v=7 --alsologtostderr: exit status 82 (2m1.840035663s)

                                                
                                                
-- stdout --
	* Stopping node "ha-235073-m04"  ...
	* Stopping node "ha-235073-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:55:16.284112  145946 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:55:16.284221  145946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:55:16.284229  145946 out.go:304] Setting ErrFile to fd 2...
	I0731 19:55:16.284233  145946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:55:16.284481  145946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:55:16.284738  145946 out.go:298] Setting JSON to false
	I0731 19:55:16.284828  145946 mustload.go:65] Loading cluster: ha-235073
	I0731 19:55:16.285195  145946 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:55:16.285280  145946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:55:16.285486  145946 mustload.go:65] Loading cluster: ha-235073
	I0731 19:55:16.285625  145946 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:55:16.285654  145946 stop.go:39] StopHost: ha-235073-m04
	I0731 19:55:16.286020  145946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:16.286071  145946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:16.301095  145946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33495
	I0731 19:55:16.301579  145946 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:16.302219  145946 main.go:141] libmachine: Using API Version  1
	I0731 19:55:16.302243  145946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:16.302590  145946 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:16.305180  145946 out.go:177] * Stopping node "ha-235073-m04"  ...
	I0731 19:55:16.306525  145946 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 19:55:16.306569  145946 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 19:55:16.306782  145946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 19:55:16.306802  145946 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 19:55:16.309610  145946 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:16.310012  145946 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:50:58 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 19:55:16.310040  145946 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 19:55:16.310160  145946 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 19:55:16.310336  145946 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 19:55:16.310530  145946 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 19:55:16.310681  145946 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 19:55:16.397927  145946 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 19:55:16.451979  145946 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 19:55:16.505769  145946 main.go:141] libmachine: Stopping "ha-235073-m04"...
	I0731 19:55:16.505795  145946 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:55:16.507374  145946 main.go:141] libmachine: (ha-235073-m04) Calling .Stop
	I0731 19:55:16.510569  145946 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 0/120
	I0731 19:55:17.651104  145946 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 19:55:17.652432  145946 main.go:141] libmachine: Machine "ha-235073-m04" was stopped.
	I0731 19:55:17.652452  145946 stop.go:75] duration metric: took 1.345928585s to stop
	I0731 19:55:17.652491  145946 stop.go:39] StopHost: ha-235073-m03
	I0731 19:55:17.652780  145946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:55:17.652818  145946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:55:17.667807  145946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0731 19:55:17.668262  145946 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:55:17.668742  145946 main.go:141] libmachine: Using API Version  1
	I0731 19:55:17.668763  145946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:55:17.669097  145946 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:55:17.672100  145946 out.go:177] * Stopping node "ha-235073-m03"  ...
	I0731 19:55:17.673574  145946 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 19:55:17.673606  145946 main.go:141] libmachine: (ha-235073-m03) Calling .DriverName
	I0731 19:55:17.673830  145946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 19:55:17.673853  145946 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHHostname
	I0731 19:55:17.676619  145946 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:17.676943  145946 main.go:141] libmachine: (ha-235073-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:fb:8e", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:49:27 +0000 UTC Type:0 Mac:52:54:00:6d:fb:8e Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:ha-235073-m03 Clientid:01:52:54:00:6d:fb:8e}
	I0731 19:55:17.676968  145946 main.go:141] libmachine: (ha-235073-m03) DBG | domain ha-235073-m03 has defined IP address 192.168.39.136 and MAC address 52:54:00:6d:fb:8e in network mk-ha-235073
	I0731 19:55:17.677166  145946 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHPort
	I0731 19:55:17.677311  145946 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHKeyPath
	I0731 19:55:17.677469  145946 main.go:141] libmachine: (ha-235073-m03) Calling .GetSSHUsername
	I0731 19:55:17.677624  145946 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m03/id_rsa Username:docker}
	I0731 19:55:17.770619  145946 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 19:55:17.827735  145946 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 19:55:17.884797  145946 main.go:141] libmachine: Stopping "ha-235073-m03"...
	I0731 19:55:17.884819  145946 main.go:141] libmachine: (ha-235073-m03) Calling .GetState
	I0731 19:55:17.886365  145946 main.go:141] libmachine: (ha-235073-m03) Calling .Stop
	I0731 19:55:17.889776  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 0/120
	I0731 19:55:18.891222  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 1/120
	I0731 19:55:19.892573  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 2/120
	I0731 19:55:20.894192  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 3/120
	I0731 19:55:21.895574  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 4/120
	I0731 19:55:22.897648  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 5/120
	I0731 19:55:23.900275  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 6/120
	I0731 19:55:24.901703  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 7/120
	I0731 19:55:25.903117  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 8/120
	I0731 19:55:26.904789  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 9/120
	I0731 19:55:27.907143  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 10/120
	I0731 19:55:28.908940  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 11/120
	I0731 19:55:29.910597  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 12/120
	I0731 19:55:30.912167  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 13/120
	I0731 19:55:31.913840  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 14/120
	I0731 19:55:32.915203  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 15/120
	I0731 19:55:33.917070  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 16/120
	I0731 19:55:34.918506  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 17/120
	I0731 19:55:35.920063  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 18/120
	I0731 19:55:36.922210  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 19/120
	I0731 19:55:37.923980  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 20/120
	I0731 19:55:38.926265  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 21/120
	I0731 19:55:39.928051  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 22/120
	I0731 19:55:40.929482  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 23/120
	I0731 19:55:41.930811  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 24/120
	I0731 19:55:42.932641  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 25/120
	I0731 19:55:43.934229  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 26/120
	I0731 19:55:44.935631  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 27/120
	I0731 19:55:45.937065  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 28/120
	I0731 19:55:46.938565  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 29/120
	I0731 19:55:47.940939  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 30/120
	I0731 19:55:48.942311  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 31/120
	I0731 19:55:49.943802  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 32/120
	I0731 19:55:50.945065  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 33/120
	I0731 19:55:51.946400  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 34/120
	I0731 19:55:52.948059  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 35/120
	I0731 19:55:53.949399  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 36/120
	I0731 19:55:54.950776  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 37/120
	I0731 19:55:55.952260  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 38/120
	I0731 19:55:56.953525  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 39/120
	I0731 19:55:57.955203  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 40/120
	I0731 19:55:58.956500  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 41/120
	I0731 19:55:59.957865  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 42/120
	I0731 19:56:00.959189  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 43/120
	I0731 19:56:01.960441  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 44/120
	I0731 19:56:02.962152  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 45/120
	I0731 19:56:03.963604  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 46/120
	I0731 19:56:04.964692  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 47/120
	I0731 19:56:05.965965  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 48/120
	I0731 19:56:06.967187  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 49/120
	I0731 19:56:07.968872  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 50/120
	I0731 19:56:08.970260  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 51/120
	I0731 19:56:09.971430  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 52/120
	I0731 19:56:10.972778  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 53/120
	I0731 19:56:11.974208  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 54/120
	I0731 19:56:12.976293  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 55/120
	I0731 19:56:13.977539  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 56/120
	I0731 19:56:14.978868  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 57/120
	I0731 19:56:15.979997  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 58/120
	I0731 19:56:16.981103  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 59/120
	I0731 19:56:17.983330  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 60/120
	I0731 19:56:18.985216  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 61/120
	I0731 19:56:19.987059  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 62/120
	I0731 19:56:20.988206  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 63/120
	I0731 19:56:21.989699  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 64/120
	I0731 19:56:22.991622  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 65/120
	I0731 19:56:23.993059  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 66/120
	I0731 19:56:24.995339  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 67/120
	I0731 19:56:25.996742  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 68/120
	I0731 19:56:26.998345  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 69/120
	I0731 19:56:28.000174  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 70/120
	I0731 19:56:29.001577  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 71/120
	I0731 19:56:30.002976  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 72/120
	I0731 19:56:31.004187  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 73/120
	I0731 19:56:32.005635  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 74/120
	I0731 19:56:33.007198  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 75/120
	I0731 19:56:34.008546  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 76/120
	I0731 19:56:35.009848  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 77/120
	I0731 19:56:36.012063  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 78/120
	I0731 19:56:37.013195  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 79/120
	I0731 19:56:38.014489  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 80/120
	I0731 19:56:39.015700  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 81/120
	I0731 19:56:40.016817  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 82/120
	I0731 19:56:41.018185  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 83/120
	I0731 19:56:42.019287  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 84/120
	I0731 19:56:43.021230  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 85/120
	I0731 19:56:44.022463  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 86/120
	I0731 19:56:45.023692  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 87/120
	I0731 19:56:46.025081  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 88/120
	I0731 19:56:47.026211  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 89/120
	I0731 19:56:48.027775  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 90/120
	I0731 19:56:49.029229  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 91/120
	I0731 19:56:50.030539  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 92/120
	I0731 19:56:51.032008  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 93/120
	I0731 19:56:52.033309  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 94/120
	I0731 19:56:53.035086  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 95/120
	I0731 19:56:54.036809  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 96/120
	I0731 19:56:55.038171  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 97/120
	I0731 19:56:56.039788  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 98/120
	I0731 19:56:57.041111  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 99/120
	I0731 19:56:58.042981  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 100/120
	I0731 19:56:59.044284  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 101/120
	I0731 19:57:00.045713  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 102/120
	I0731 19:57:01.047272  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 103/120
	I0731 19:57:02.048792  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 104/120
	I0731 19:57:03.050187  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 105/120
	I0731 19:57:04.051507  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 106/120
	I0731 19:57:05.052814  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 107/120
	I0731 19:57:06.054730  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 108/120
	I0731 19:57:07.055997  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 109/120
	I0731 19:57:08.057941  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 110/120
	I0731 19:57:09.059273  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 111/120
	I0731 19:57:10.060754  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 112/120
	I0731 19:57:11.062234  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 113/120
	I0731 19:57:12.063573  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 114/120
	I0731 19:57:13.065316  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 115/120
	I0731 19:57:14.066576  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 116/120
	I0731 19:57:15.067939  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 117/120
	I0731 19:57:16.069183  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 118/120
	I0731 19:57:17.070457  145946 main.go:141] libmachine: (ha-235073-m03) Waiting for machine to stop 119/120
	I0731 19:57:18.071022  145946 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 19:57:18.071094  145946 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 19:57:18.073314  145946 out.go:177] 
	W0731 19:57:18.074840  145946 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 19:57:18.074859  145946 out.go:239] * 
	* 
	W0731 19:57:18.077301  145946 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 19:57:18.078776  145946 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-235073 -v=7 --alsologtostderr" : exit status 82
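Exit status 82 corresponds to the GUEST_STOP_TIMEOUT above: the stop command polled ha-235073-m03 120 times, roughly one second apart, and the guest never left the "Running" state. Below is a minimal sketch of the same poll, assuming virsh is installed on the host (this is not minikube's stop.go); it can be used to watch the libvirt domain state by hand while debugging:

// poll_stop.go: a sketch only, mirroring the 120 x ~1s wait loop above.
// It asks libvirt (via virsh) for the domain state until it reports
// "shut off" or the attempt budget runs out.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForShutoff(domain string, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("virsh", "domstate", domain).Output()
		if err != nil {
			return fmt.Errorf("virsh domstate %s: %w", domain, err)
		}
		state := strings.TrimSpace(string(out))
		fmt.Printf("attempt %d/%d: %s is %q\n", i+1, attempts, domain, state)
		if state == "shut off" {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s did not stop after %d attempts", domain, attempts)
}

func main() {
	// Same budget the log shows: 120 checks, roughly one second apart.
	if err := waitForShutoff("ha-235073-m03", 120, time.Second); err != nil {
		fmt.Println("stop timed out:", err) // the condition reported as GUEST_STOP_TIMEOUT above
	}
}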
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-235073 --wait=true -v=7 --alsologtostderr
E0731 19:57:34.577453  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:58:57.623819  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 20:00:09.825026  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-235073 --wait=true -v=7 --alsologtostderr: (4m33.029153402s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-235073
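ha_test.go:472 re-checks `node list` on the restarted cluster. A rough standalone equivalent of that check, using the binary path and profile name from the log and the four node names from the cluster config above (this is a sketch, not the test's actual helper):

// verify_nodes.go: a sketch only. Runs the same `node list` command the test
// invokes and confirms all four node names still appear in the output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", "ha-235073").CombinedOutput()
	if err != nil {
		fmt.Println("node list failed:", err)
		return
	}
	listing := string(out)
	fmt.Print(listing)
	for _, node := range []string{"ha-235073", "ha-235073-m02", "ha-235073-m03", "ha-235073-m04"} {
		if !strings.Contains(listing, node) {
			fmt.Println("missing node:", node)
		}
	}
}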
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-235073 -n ha-235073
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-235073 logs -n 25: (2.029019667s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m02:/home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m02 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04:/home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m04 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp testdata/cp-test.txt                                                | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073:/home/docker/cp-test_ha-235073-m04_ha-235073.txt                       |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073 sudo cat                                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073.txt                                 |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m02:/home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m02 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03:/home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m03 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-235073 node stop m02 -v=7                                                     | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-235073 node start m02 -v=7                                                    | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-235073 -v=7                                                           | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-235073 -v=7                                                                | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-235073 --wait=true -v=7                                                    | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:57 UTC | 31 Jul 24 20:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-235073                                                                | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 20:01 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:57:18
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:57:18.126314  146425 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:57:18.126578  146425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:57:18.126587  146425 out.go:304] Setting ErrFile to fd 2...
	I0731 19:57:18.126591  146425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:57:18.126792  146425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:57:18.127416  146425 out.go:298] Setting JSON to false
	I0731 19:57:18.128313  146425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5974,"bootTime":1722449864,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:57:18.128373  146425 start.go:139] virtualization: kvm guest
	I0731 19:57:18.130640  146425 out.go:177] * [ha-235073] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:57:18.132306  146425 notify.go:220] Checking for updates...
	I0731 19:57:18.132348  146425 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:57:18.133853  146425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:57:18.135421  146425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:57:18.136790  146425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:57:18.138038  146425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:57:18.139283  146425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:57:18.140839  146425 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:57:18.140959  146425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:57:18.141421  146425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:57:18.141502  146425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:57:18.156558  146425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0731 19:57:18.157040  146425 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:57:18.157665  146425 main.go:141] libmachine: Using API Version  1
	I0731 19:57:18.157688  146425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:57:18.158069  146425 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:57:18.158239  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:57:18.191407  146425 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:57:18.192837  146425 start.go:297] selected driver: kvm2
	I0731 19:57:18.192854  146425 start.go:901] validating driver "kvm2" against &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.62 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:57:18.192997  146425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:57:18.193360  146425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:57:18.193435  146425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:57:18.207551  146425 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:57:18.208316  146425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:57:18.208352  146425 cni.go:84] Creating CNI manager for ""
	I0731 19:57:18.208359  146425 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 19:57:18.208426  146425 start.go:340] cluster config:
	{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.62 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:57:18.208544  146425 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:57:18.210414  146425 out.go:177] * Starting "ha-235073" primary control-plane node in "ha-235073" cluster
	I0731 19:57:18.211712  146425 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:57:18.211748  146425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:57:18.211757  146425 cache.go:56] Caching tarball of preloaded images
	I0731 19:57:18.211844  146425 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:57:18.211856  146425 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:57:18.211965  146425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:57:18.212162  146425 start.go:360] acquireMachinesLock for ha-235073: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:57:18.212203  146425 start.go:364] duration metric: took 24.6µs to acquireMachinesLock for "ha-235073"
	I0731 19:57:18.212217  146425 start.go:96] Skipping create...Using existing machine configuration
	I0731 19:57:18.212225  146425 fix.go:54] fixHost starting: 
	I0731 19:57:18.212510  146425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:57:18.212542  146425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:57:18.226281  146425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0731 19:57:18.226750  146425 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:57:18.227203  146425 main.go:141] libmachine: Using API Version  1
	I0731 19:57:18.227220  146425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:57:18.227597  146425 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:57:18.227772  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:57:18.227937  146425 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:57:18.229194  146425 fix.go:112] recreateIfNeeded on ha-235073: state=Running err=<nil>
	W0731 19:57:18.229208  146425 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 19:57:18.230989  146425 out.go:177] * Updating the running kvm2 "ha-235073" VM ...
	I0731 19:57:18.232248  146425 machine.go:94] provisionDockerMachine start ...
	I0731 19:57:18.232263  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:57:18.232499  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.234930  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.235356  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.235412  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.235563  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.235748  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.235926  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.236096  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.236240  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:57:18.236417  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:57:18.236429  146425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 19:57:18.338585  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073
	
	I0731 19:57:18.338616  146425 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:57:18.338888  146425 buildroot.go:166] provisioning hostname "ha-235073"
	I0731 19:57:18.338917  146425 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:57:18.339100  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.341400  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.341778  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.341808  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.341946  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.342145  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.342306  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.342456  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.342628  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:57:18.342813  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:57:18.342825  146425 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-235073 && echo "ha-235073" | sudo tee /etc/hostname
	I0731 19:57:18.456809  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073
	
	I0731 19:57:18.456836  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.459556  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.459948  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.459982  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.460185  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.460399  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.460556  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.460689  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.460829  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:57:18.461014  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:57:18.461036  146425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-235073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-235073/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-235073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:57:18.566301  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:57:18.566338  146425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:57:18.566421  146425 buildroot.go:174] setting up certificates
	I0731 19:57:18.566435  146425 provision.go:84] configureAuth start
	I0731 19:57:18.566454  146425 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:57:18.566776  146425 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:57:18.569304  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.569698  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.569739  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.569834  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.571970  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.572311  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.572340  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.572468  146425 provision.go:143] copyHostCerts
	I0731 19:57:18.572508  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:57:18.572551  146425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 19:57:18.572564  146425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:57:18.572644  146425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:57:18.572755  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:57:18.572781  146425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 19:57:18.572786  146425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:57:18.572820  146425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:57:18.572954  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:57:18.572981  146425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 19:57:18.572990  146425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:57:18.573029  146425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:57:18.573111  146425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.ha-235073 san=[127.0.0.1 192.168.39.146 ha-235073 localhost minikube]
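The provision.go line above records issuing a server certificate signed by the profile CA, with org jenkins.ha-235073 and SANs 127.0.0.1, 192.168.39.146, ha-235073, localhost, and minikube. A minimal sketch of producing an equivalent certificate with Go's crypto/x509, assuming the CA key is a PKCS#1 RSA PEM (an assumption; the log does not state the key format), and not minikube's actual provisioner:

// gen_server_cert.go: a sketch only. Issues a server certificate with the
// organization and SANs shown in the provision.go line above, signed by an
// existing CA cert/key pair on disk.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: PKCS#1 RSA key
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-235073"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration value from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs listed in the log: 127.0.0.1 192.168.39.146 ha-235073 localhost minikube
		DNSNames:    []string{"ha-235073", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.146")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	_ = os.WriteFile("server.pem", certOut, 0o644)
	_ = os.WriteFile("server-key.pem", keyOut, 0o600)
}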
	I0731 19:57:18.818409  146425 provision.go:177] copyRemoteCerts
	I0731 19:57:18.818478  146425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:57:18.818527  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.821064  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.821493  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.821522  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.821700  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.821893  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.822055  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.822162  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:57:18.900229  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:57:18.900307  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:57:18.924721  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:57:18.924794  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 19:57:18.948208  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:57:18.948287  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 19:57:18.971950  146425 provision.go:87] duration metric: took 405.496261ms to configureAuth
	I0731 19:57:18.971983  146425 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:57:18.972184  146425 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:57:18.972252  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.974968  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.975326  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.975354  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.975530  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.975742  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.975903  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.976060  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.976253  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:57:18.976458  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:57:18.976475  146425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:58:49.906740  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:58:49.906771  146425 machine.go:97] duration metric: took 1m31.674510536s to provisionDockerMachine
	I0731 19:58:49.906784  146425 start.go:293] postStartSetup for "ha-235073" (driver="kvm2")
	I0731 19:58:49.906796  146425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:58:49.906829  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:49.907140  146425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:58:49.907165  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:49.910091  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:49.910503  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:49.910527  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:49.910719  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:49.910918  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:49.911097  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:49.911243  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:58:49.993402  146425 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:58:49.997622  146425 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:58:49.997648  146425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:58:49.997719  146425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:58:49.997795  146425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 19:58:49.997807  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 19:58:49.997919  146425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:58:50.007626  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:58:50.032273  146425 start.go:296] duration metric: took 125.474871ms for postStartSetup
	I0731 19:58:50.032312  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.032585  146425 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 19:58:50.032608  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:50.035057  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.035444  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.035474  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.035639  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:50.035817  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.035973  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:50.036113  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	W0731 19:58:50.116220  146425 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 19:58:50.116248  146425 fix.go:56] duration metric: took 1m31.904023426s for fixHost
	I0731 19:58:50.116270  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:50.118815  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.119321  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.119351  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.119552  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:50.119744  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.119905  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.120062  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:50.120234  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:58:50.120434  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:58:50.120449  146425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 19:58:50.218209  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722455930.163022549
	
	I0731 19:58:50.218229  146425 fix.go:216] guest clock: 1722455930.163022549
	I0731 19:58:50.218239  146425 fix.go:229] Guest: 2024-07-31 19:58:50.163022549 +0000 UTC Remote: 2024-07-31 19:58:50.116256006 +0000 UTC m=+92.026454219 (delta=46.766543ms)
	I0731 19:58:50.218264  146425 fix.go:200] guest clock delta is within tolerance: 46.766543ms
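The fix.go lines above compare the guest's clock against the host's and accept the drift because it is only ~46.8ms. A minimal Go sketch of that check follows; withinTolerance is a hypothetical helper name, not minikube's actual function, and the one-second tolerance is an assumed value for illustration.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the skew between guest and host clocks is
// small enough to skip resynchronising the guest, returning the absolute delta.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(46766543 * time.Nanosecond) // ~46.77ms skew, as seen in the log above
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}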
	I0731 19:58:50.218272  146425 start.go:83] releasing machines lock for "ha-235073", held for 1m32.006059256s
	I0731 19:58:50.218296  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.218570  146425 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:58:50.221278  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.221654  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.221672  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.221827  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.222297  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.222462  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.222538  146425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:58:50.222588  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:50.222632  146425 ssh_runner.go:195] Run: cat /version.json
	I0731 19:58:50.222651  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:50.225215  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.225371  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.225590  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.225614  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.225689  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.225709  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.225750  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:50.225877  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:50.225959  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.226024  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.226123  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:50.226184  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:50.226301  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:58:50.226358  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:58:50.299001  146425 ssh_runner.go:195] Run: systemctl --version
	I0731 19:58:50.322573  146425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:58:50.480720  146425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:58:50.488117  146425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:58:50.488180  146425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:58:50.497571  146425 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 19:58:50.497592  146425 start.go:495] detecting cgroup driver to use...
	I0731 19:58:50.497656  146425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:58:50.513412  146425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:58:50.527207  146425 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:58:50.527276  146425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:58:50.541500  146425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:58:50.554909  146425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:58:50.708744  146425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:58:50.852347  146425 docker.go:233] disabling docker service ...
	I0731 19:58:50.852439  146425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:58:50.869186  146425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:58:50.884046  146425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:58:51.028216  146425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:58:51.172713  146425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:58:51.186354  146425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:58:51.205923  146425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:58:51.205993  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.216143  146425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:58:51.216214  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.226402  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.237655  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.248392  146425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:58:51.258883  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.268989  146425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.280655  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.290990  146425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:58:51.300490  146425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:58:51.309736  146425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:58:51.453094  146425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:58:58.988616  146425 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.535487725s)
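The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf via sed over SSH (pause image, cgroup manager, sysctls) and then restarts CRI-O. A small Go sketch of how those commands could be assembled follows; crioConfigCommands is a hypothetical helper written for this report, not the generator minikube actually uses, and it only covers the two main sed edits plus the restart.

package main

import "fmt"

// crioConfigCommands builds the shell commands that point CRI-O at the desired
// pause image and cgroup manager and then restart the service, mirroring the
// ssh_runner invocations logged above.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}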
	I0731 19:58:58.988642  146425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:58:58.988688  146425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:58:58.993969  146425 start.go:563] Will wait 60s for crictl version
	I0731 19:58:58.994027  146425 ssh_runner.go:195] Run: which crictl
	I0731 19:58:58.998184  146425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:58:59.035495  146425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:58:59.035577  146425 ssh_runner.go:195] Run: crio --version
	I0731 19:58:59.064772  146425 ssh_runner.go:195] Run: crio --version
	I0731 19:58:59.097863  146425 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:58:59.099450  146425 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:58:59.102204  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:59.102611  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:59.102638  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:59.102863  146425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:58:59.107827  146425 kubeadm.go:883] updating cluster {Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.62 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:58:59.107951  146425 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:58:59.107991  146425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:58:59.153073  146425 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:58:59.153094  146425 crio.go:433] Images already preloaded, skipping extraction
	I0731 19:58:59.153141  146425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:58:59.187839  146425 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:58:59.187864  146425 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:58:59.187873  146425 kubeadm.go:934] updating node { 192.168.39.146 8443 v1.30.3 crio true true} ...
	I0731 19:58:59.187969  146425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-235073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:58:59.188050  146425 ssh_runner.go:195] Run: crio config
	I0731 19:58:59.244157  146425 cni.go:84] Creating CNI manager for ""
	I0731 19:58:59.244175  146425 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 19:58:59.244185  146425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:58:59.244207  146425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-235073 NodeName:ha-235073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:58:59.244329  146425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-235073"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
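The kubeadm config dumped above is generated from the cluster profile (advertise address 192.168.39.146, port 8443, CRI-O socket). A trimmed Go sketch of rendering just the InitConfiguration stanza with text/template follows; initCfgTmpl and the struct fields are illustrative assumptions, and the real generator fills in far more of the document than this.

package main

import (
	"os"
	"text/template"
)

// initCfgTmpl covers only the InitConfiguration portion of the kubeadm config
// shown in the log above.
const initCfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	data := struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.39.146", 8443, "ha-235073"}
	tmpl := template.Must(template.New("init").Parse(initCfgTmpl))
	_ = tmpl.Execute(os.Stdout, data)
}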
	I0731 19:58:59.244351  146425 kube-vip.go:115] generating kube-vip config ...
	I0731 19:58:59.244391  146425 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 19:58:59.255931  146425 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 19:58:59.256024  146425 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
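The kube-vip static pod above is configured entirely through environment variables: ARP-based VIP 192.168.39.254 on eth0, with control-plane load-balancing on port 8443. A short Go sketch collecting those settings follows; kubeVIPEnv is a hypothetical helper for this report and only lists the variables most relevant to the HA setup, not the full set in the manifest.

package main

import "fmt"

// kubeVIPEnv gathers the key environment variables from the generated
// kube-vip manifest above: ARP VIP, interface, API server port, and
// control-plane load-balancing toggles.
func kubeVIPEnv(vip, iface string, apiPort int) map[string]string {
	return map[string]string{
		"vip_arp":       "true",
		"address":       vip,
		"vip_interface": iface,
		"port":          fmt.Sprint(apiPort),
		"cp_enable":     "true",
		"lb_enable":     "true",
		"lb_port":       fmt.Sprint(apiPort),
	}
}

func main() {
	for k, v := range kubeVIPEnv("192.168.39.254", "eth0", 8443) {
		fmt.Printf("%s=%s\n", k, v)
	}
}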
	I0731 19:58:59.256076  146425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:58:59.265207  146425 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:58:59.265273  146425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 19:58:59.274872  146425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 19:58:59.291698  146425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:58:59.308502  146425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 19:58:59.324621  146425 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 19:58:59.340786  146425 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 19:58:59.345966  146425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:58:59.504089  146425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:58:59.519033  146425 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073 for IP: 192.168.39.146
	I0731 19:58:59.519060  146425 certs.go:194] generating shared ca certs ...
	I0731 19:58:59.519082  146425 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:58:59.519288  146425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:58:59.519333  146425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:58:59.519344  146425 certs.go:256] generating profile certs ...
	I0731 19:58:59.519424  146425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key
	I0731 19:58:59.519451  146425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.f0fda48b
	I0731 19:58:59.519470  146425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.f0fda48b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.146 192.168.39.102 192.168.39.136 192.168.39.254]
	I0731 19:58:59.732199  146425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.f0fda48b ...
	I0731 19:58:59.732230  146425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.f0fda48b: {Name:mk0d0eff6286966b5094c7180b8ed30b860af134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:58:59.732415  146425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.f0fda48b ...
	I0731 19:58:59.732428  146425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.f0fda48b: {Name:mkddf010c68b82230fff7a059326ba0136a59a1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:58:59.732506  146425 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.f0fda48b -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt
	I0731 19:58:59.732647  146425 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.f0fda48b -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key
	I0731 19:58:59.732774  146425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key
	I0731 19:58:59.732791  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:58:59.732803  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:58:59.732817  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:58:59.732829  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:58:59.732841  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:58:59.732853  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:58:59.732863  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:58:59.732873  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:58:59.732934  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 19:58:59.732962  146425 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 19:58:59.732971  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:58:59.732993  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:58:59.733014  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:58:59.733035  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:58:59.733071  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:58:59.733098  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 19:58:59.733112  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:58:59.733124  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 19:58:59.733667  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:58:59.759184  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:58:59.782868  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:58:59.806827  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:58:59.830076  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 19:58:59.854461  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:58:59.877953  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:58:59.901800  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:58:59.925119  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 19:58:59.947869  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:58:59.971595  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 19:58:59.994473  146425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:59:00.011519  146425 ssh_runner.go:195] Run: openssl version
	I0731 19:59:00.017395  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 19:59:00.028661  146425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 19:59:00.033047  146425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 19:59:00.033093  146425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 19:59:00.038794  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 19:59:00.048532  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 19:59:00.059359  146425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 19:59:00.064174  146425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 19:59:00.064232  146425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 19:59:00.070223  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:59:00.079819  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:59:00.090323  146425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:59:00.094497  146425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:59:00.094563  146425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:59:00.100068  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:59:00.109810  146425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:59:00.114269  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 19:59:00.119830  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 19:59:00.125412  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 19:59:00.131074  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 19:59:00.137161  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 19:59:00.142918  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
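Each of the `openssl x509 -checkend 86400` runs above asks whether a certificate expires within the next 24 hours. A minimal Go equivalent using crypto/x509 follows; expiresWithin is a hypothetical helper, and the certificate path is supplied on the command line purely for illustration (on a guest it could be one of the paths checked above, e.g. /var/lib/minikube/certs/etcd/server.crt).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: it reports whether the
// certificate's NotAfter falls inside the next d.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(1)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}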
	I0731 19:59:00.148449  146425 kubeadm.go:392] StartCluster: {Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.62 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:59:00.148605  146425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:59:00.148685  146425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:59:00.189822  146425 cri.go:89] found id: "84ebfd404aca6326bf68b0b8238e99a5ec5adb72818319637ec19cfcbe8631e4"
	I0731 19:59:00.189846  146425 cri.go:89] found id: "aee7190231c2884f881211e16e64da0273c102ce1b3256ddedf8a18954fcdcb2"
	I0731 19:59:00.189851  146425 cri.go:89] found id: "54f9febcea6106d9cd695ee7e37e0333d85f3158a67944dcf43a24aaab1a3672"
	I0731 19:59:00.189854  146425 cri.go:89] found id: "3881bd1062c2997bb583fb122a03ed65b220c1c102b0d2ec1599b5be1d9f6e81"
	I0731 19:59:00.189857  146425 cri.go:89] found id: "a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22"
	I0731 19:59:00.189860  146425 cri.go:89] found id: "30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90"
	I0731 19:59:00.189863  146425 cri.go:89] found id: "ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a"
	I0731 19:59:00.189865  146425 cri.go:89] found id: "8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac"
	I0731 19:59:00.189868  146425 cri.go:89] found id: "c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929"
	I0731 19:59:00.189873  146425 cri.go:89] found id: "9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae"
	I0731 19:59:00.189875  146425 cri.go:89] found id: "216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498"
	I0731 19:59:00.189878  146425 cri.go:89] found id: "cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77"
	I0731 19:59:00.189881  146425 cri.go:89] found id: "c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e"
	I0731 19:59:00.189883  146425 cri.go:89] found id: ""
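The "found id" lines above come from listing every kube-system container (running or exited) via crictl with a namespace label filter. A small Go sketch of that listing follows; listKubeSystemContainers is a hypothetical wrapper around the exact crictl invocation shown in the log, and it simply splits the quiet output into container IDs.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl command logged above and
// returns the bare container IDs it prints.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl not available here:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}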
	I0731 19:59:00.189924  146425 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 20:01:51 ha-235073 crio[3874]: time="2024-07-31 20:01:51.965393173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456111965257336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbfaabf2-10d2-4d79-8a47-663f22524efa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:01:51 ha-235073 crio[3874]: time="2024-07-31 20:01:51.966006780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cf14e08-520f-4a6a-ab0a-bc81aa98a920 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:51 ha-235073 crio[3874]: time="2024-07-31 20:01:51.966078920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cf14e08-520f-4a6a-ab0a-bc81aa98a920 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:51 ha-235073 crio[3874]: time="2024-07-31 20:01:51.966615404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6fc6ee68a8ccfb01d95ed85dec112703b54962234bae1d676aa89616fd0d648,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456046839314283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455985843331084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1ba8b7107cb1bb158c11842ebcf14895a00b9078c118782deb224da5f52857,PodSandboxId:12599677e4703009288f3e1ebb26cef5d2d92ff75dc4d34f0862b423231967e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455979135587699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455978455073735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44f6367a3b0ea167ac08a9af6d2d1fa3d461c8d9327717846fb62a5557e9c2c,PodSandboxId:58c637ddc0deb5375375c3cebc48c63bec8c194a4b291d4a0efb90bceefc1b88,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722455960808358994,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a848381e2b4246b93417e0d0fd8a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722455946133407561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c,PodSandboxId:71e733d0386996f8415fcd8f9dca7d182b370c12a5d83983e5aa863ef3a11e3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722455945896815817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56,PodSandboxId:0264a243f9156fcf1716437242b46a73547a11738e3bddc34a598244d83b6db4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945975413462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9,PodSandboxId:4660c263a94b86c191ea6d914602653108603338f4ca0526650406de37c88ddd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945930399014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d,PodSandboxId:fe0a920941145e7dd18da34c0b434129669a34948d10f6f7e3e3e0b1465c05ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455945760657460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867,PodSandboxId:25c8bdc2c5ffd9917b05ef670d88081b4ac4474ccc2b30d2a90b38c56bb204a7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722455945806972829,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910
ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722455945827966617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722455945730424491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad
975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd,PodSandboxId:64211b1205b16f2e0a1cf98f66401dd8bce4ccefa11fcaf473420341b6277383,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455945528476575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722455438711854968,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annot
ations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228102941927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kube
rnetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228031302004,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722455215945193946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722455211859741609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722455191498044732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722455191481588976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cf14e08-520f-4a6a-ab0a-bc81aa98a920 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.014914303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e5b012e-72aa-4d15-8754-f5c29b848a55 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.015142699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e5b012e-72aa-4d15-8754-f5c29b848a55 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.016378717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d04c672c-c50b-4a2e-971e-04fc746a4abb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.017424793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456112017356537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d04c672c-c50b-4a2e-971e-04fc746a4abb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.019214001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8cb5288-7829-4bd5-9f49-387566956bfd name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.019292993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8cb5288-7829-4bd5-9f49-387566956bfd name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.020193749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6fc6ee68a8ccfb01d95ed85dec112703b54962234bae1d676aa89616fd0d648,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456046839314283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455985843331084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1ba8b7107cb1bb158c11842ebcf14895a00b9078c118782deb224da5f52857,PodSandboxId:12599677e4703009288f3e1ebb26cef5d2d92ff75dc4d34f0862b423231967e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455979135587699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455978455073735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44f6367a3b0ea167ac08a9af6d2d1fa3d461c8d9327717846fb62a5557e9c2c,PodSandboxId:58c637ddc0deb5375375c3cebc48c63bec8c194a4b291d4a0efb90bceefc1b88,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722455960808358994,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a848381e2b4246b93417e0d0fd8a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722455946133407561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c,PodSandboxId:71e733d0386996f8415fcd8f9dca7d182b370c12a5d83983e5aa863ef3a11e3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722455945896815817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56,PodSandboxId:0264a243f9156fcf1716437242b46a73547a11738e3bddc34a598244d83b6db4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945975413462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9,PodSandboxId:4660c263a94b86c191ea6d914602653108603338f4ca0526650406de37c88ddd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945930399014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d,PodSandboxId:fe0a920941145e7dd18da34c0b434129669a34948d10f6f7e3e3e0b1465c05ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455945760657460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867,PodSandboxId:25c8bdc2c5ffd9917b05ef670d88081b4ac4474ccc2b30d2a90b38c56bb204a7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722455945806972829,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910
ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722455945827966617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722455945730424491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad
975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd,PodSandboxId:64211b1205b16f2e0a1cf98f66401dd8bce4ccefa11fcaf473420341b6277383,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455945528476575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722455438711854968,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annot
ations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228102941927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kube
rnetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228031302004,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722455215945193946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722455211859741609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722455191498044732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722455191481588976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8cb5288-7829-4bd5-9f49-387566956bfd name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.064717598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b38ccceb-cf89-40d2-834b-e31d1cd99f84 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.065015213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b38ccceb-cf89-40d2-834b-e31d1cd99f84 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.066456647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b631827a-2b33-474a-b74d-6e4239a041ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.067061916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456112067039276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b631827a-2b33-474a-b74d-6e4239a041ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.067804523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c288c1a1-7aa1-429c-b05e-08fdc1292e9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.067859115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c288c1a1-7aa1-429c-b05e-08fdc1292e9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.068313653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6fc6ee68a8ccfb01d95ed85dec112703b54962234bae1d676aa89616fd0d648,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456046839314283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455985843331084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1ba8b7107cb1bb158c11842ebcf14895a00b9078c118782deb224da5f52857,PodSandboxId:12599677e4703009288f3e1ebb26cef5d2d92ff75dc4d34f0862b423231967e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455979135587699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455978455073735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44f6367a3b0ea167ac08a9af6d2d1fa3d461c8d9327717846fb62a5557e9c2c,PodSandboxId:58c637ddc0deb5375375c3cebc48c63bec8c194a4b291d4a0efb90bceefc1b88,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722455960808358994,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a848381e2b4246b93417e0d0fd8a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722455946133407561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c,PodSandboxId:71e733d0386996f8415fcd8f9dca7d182b370c12a5d83983e5aa863ef3a11e3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722455945896815817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56,PodSandboxId:0264a243f9156fcf1716437242b46a73547a11738e3bddc34a598244d83b6db4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945975413462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9,PodSandboxId:4660c263a94b86c191ea6d914602653108603338f4ca0526650406de37c88ddd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945930399014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d,PodSandboxId:fe0a920941145e7dd18da34c0b434129669a34948d10f6f7e3e3e0b1465c05ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455945760657460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867,PodSandboxId:25c8bdc2c5ffd9917b05ef670d88081b4ac4474ccc2b30d2a90b38c56bb204a7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722455945806972829,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910
ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722455945827966617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722455945730424491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad
975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd,PodSandboxId:64211b1205b16f2e0a1cf98f66401dd8bce4ccefa11fcaf473420341b6277383,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455945528476575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722455438711854968,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annot
ations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228102941927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kube
rnetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228031302004,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722455215945193946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722455211859741609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722455191498044732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722455191481588976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c288c1a1-7aa1-429c-b05e-08fdc1292e9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.111627788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06435dc2-beea-4d27-bc73-f73d13abf49d name=/runtime.v1.RuntimeService/Version
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.112522830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06435dc2-beea-4d27-bc73-f73d13abf49d name=/runtime.v1.RuntimeService/Version
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.113669742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f555e6d-d117-4a84-8f65-fdbde801d6e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.114216513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456112114191999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f555e6d-d117-4a84-8f65-fdbde801d6e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.114758365Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db231fb1-2d21-4588-b135-4c7e48dda81f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.114814626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db231fb1-2d21-4588-b135-4c7e48dda81f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:01:52 ha-235073 crio[3874]: time="2024-07-31 20:01:52.116236352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6fc6ee68a8ccfb01d95ed85dec112703b54962234bae1d676aa89616fd0d648,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456046839314283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455985843331084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1ba8b7107cb1bb158c11842ebcf14895a00b9078c118782deb224da5f52857,PodSandboxId:12599677e4703009288f3e1ebb26cef5d2d92ff75dc4d34f0862b423231967e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455979135587699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455978455073735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44f6367a3b0ea167ac08a9af6d2d1fa3d461c8d9327717846fb62a5557e9c2c,PodSandboxId:58c637ddc0deb5375375c3cebc48c63bec8c194a4b291d4a0efb90bceefc1b88,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722455960808358994,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a848381e2b4246b93417e0d0fd8a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722455946133407561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c,PodSandboxId:71e733d0386996f8415fcd8f9dca7d182b370c12a5d83983e5aa863ef3a11e3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722455945896815817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56,PodSandboxId:0264a243f9156fcf1716437242b46a73547a11738e3bddc34a598244d83b6db4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945975413462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9,PodSandboxId:4660c263a94b86c191ea6d914602653108603338f4ca0526650406de37c88ddd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945930399014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d,PodSandboxId:fe0a920941145e7dd18da34c0b434129669a34948d10f6f7e3e3e0b1465c05ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455945760657460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867,PodSandboxId:25c8bdc2c5ffd9917b05ef670d88081b4ac4474ccc2b30d2a90b38c56bb204a7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722455945806972829,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910
ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722455945827966617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722455945730424491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad
975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd,PodSandboxId:64211b1205b16f2e0a1cf98f66401dd8bce4ccefa11fcaf473420341b6277383,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455945528476575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722455438711854968,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annot
ations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228102941927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kube
rnetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228031302004,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722455215945193946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722455211859741609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722455191498044732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722455191481588976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db231fb1-2d21-4588-b135-4c7e48dda81f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b6fc6ee68a8cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   7a15c9a6957d2       storage-provisioner
	b7502ddf2c06d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Running             kube-apiserver            3                   faf476c4b677c       kube-apiserver-ha-235073
	9d1ba8b7107cb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   12599677e4703       busybox-fc5497c4f-g9vds
	f60d3b03e3fca       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Running             kube-controller-manager   2                   ae02c905beb31       kube-controller-manager-ha-235073
	c44f6367a3b0e       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   58c637ddc0deb       kube-vip-ha-235073
	791841acf2544       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   7a15c9a6957d2       storage-provisioner
	9c2b76f0b85b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0264a243f9156       coredns-7db6d8ff4d-d2w7q
	5156fa7e1ef42       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   4660c263a94b8       coredns-7db6d8ff4d-f7dzt
	e7e5af865d5da       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   71e733d038699       kube-proxy-td8j2
	7b55197320466       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   ae02c905beb31       kube-controller-manager-ha-235073
	4af170fcfb9dc       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   25c8bdc2c5ffd       kindnet-6mpsn
	097aa4401bf25       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   fe0a920941145       etcd-ha-235073
	af36b29bca740       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   faf476c4b677c       kube-apiserver-ha-235073
	787234b628452       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   64211b1205b16       kube-scheduler-ha-235073
	36d67125ccdba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   6c4d1efc4989e       busybox-fc5497c4f-g9vds
	a9ddbd3f3cc5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   55ec4971c2e64       coredns-7db6d8ff4d-d2w7q
	30540ee956135       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   231aebfc0631b       coredns-7db6d8ff4d-f7dzt
	ee50c4b9e2394       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    14 minutes ago       Exited              kindnet-cni               0                   feeccc2a1a3e7       kindnet-6mpsn
	8811952c62538       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      15 minutes ago       Exited              kube-proxy                0                   dbf6b114c5cb5       kube-proxy-td8j2
	9d642debf242f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   58bfb1289eb04       etcd-ha-235073
	216984c6b7d59       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      15 minutes ago       Exited              kube-scheduler            0                   c9f1bb2690bab       kube-scheduler-ha-235073
	
	
	==> coredns [30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90] <==
	[INFO] 10.244.1.2:60484 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143802s
	[INFO] 10.244.0.4:58480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129276s
	[INFO] 10.244.2.2:36458 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001308986s
	[INFO] 10.244.2.2:48644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094253s
	[INFO] 10.244.1.2:34972 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151042s
	[INFO] 10.244.1.2:32819 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096017s
	[INFO] 10.244.1.2:48157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075225s
	[INFO] 10.244.0.4:54613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084738s
	[INFO] 10.244.0.4:60576 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000829s
	[INFO] 10.244.2.2:36544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164516s
	[INFO] 10.244.2.2:45708 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142016s
	[INFO] 10.244.2.2:40736 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110815s
	[INFO] 10.244.2.2:36751 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104862s
	[INFO] 10.244.1.2:54006 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000448605s
	[INFO] 10.244.1.2:59479 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121156s
	[INFO] 10.244.0.4:33169 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051358s
	[INFO] 10.244.2.2:44195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135177s
	[INFO] 10.244.2.2:36586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153451s
	[INFO] 10.244.2.2:56302 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124509s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9] <==
	Trace[1923712009]: [10.001563968s] [10.001563968s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43204->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[136503993]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:59:17.526) (total time: 10637ms):
	Trace[136503993]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43204->10.96.0.1:443: read: connection reset by peer 10637ms (19:59:28.164)
	Trace[136503993]: [10.637912164s] [10.637912164s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43204->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56] <==
	Trace[1272621397]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:59:25.494)
	Trace[1272621397]: [10.001001244s] [10.001001244s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1819968330]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:59:15.532) (total time: 10000ms):
	Trace[1819968330]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:59:25.533)
	Trace[1819968330]: [10.000795402s] [10.000795402s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22] <==
	[INFO] 10.244.1.2:42728 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124693s
	[INFO] 10.244.0.4:54532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008837s
	[INFO] 10.244.0.4:52959 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063732s
	[INFO] 10.244.0.4:56087 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045645s
	[INFO] 10.244.2.2:42350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130124s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=2004&timeout=6m53s&timeoutSeconds=413&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1981&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1976&timeout=9m39s&timeoutSeconds=579&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1230429363]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:57:04.688) (total time: 12454ms):
	Trace[1230429363]: ---"Objects listed" error:Unauthorized 12454ms (19:57:17.143)
	Trace[1230429363]: [12.454777329s] [12.454777329s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[440821777]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:57:05.074) (total time: 12069ms):
	Trace[440821777]: ---"Objects listed" error:Unauthorized 12069ms (19:57:17.143)
	Trace[440821777]: [12.069408161s] [12.069408161s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1045963971]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:57:04.934) (total time: 12209ms):
	Trace[1045963971]: ---"Objects listed" error:Unauthorized 12209ms (19:57:17.144)
	Trace[1045963971]: [12.209618911s] [12.209618911s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-235073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_46_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:46:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:01:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:59:48 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:59:48 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:59:48 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:59:48 +0000   Wed, 31 Jul 2024 19:47:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    ha-235073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e35869b5bfb347c6a5e12e63b257d2a1
	  System UUID:                e35869b5-bfb3-47c6-a5e1-2e63b257d2a1
	  Boot ID:                    846162a9-11ef-48d0-b284-9320ff7be7d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-g9vds              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-d2w7q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-f7dzt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-235073                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-6mpsn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-235073             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-235073    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-td8j2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-235073             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-235073                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 2m4s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-235073 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-235073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-235073 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                    node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-235073 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Warning  ContainerGCFailed        3m15s (x2 over 4m15s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           115s                   node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal   RegisteredNode           112s                   node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	
	
	Name:               ha-235073-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_48_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:48:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:01:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:00:31 +0000   Wed, 31 Jul 2024 19:59:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:00:31 +0000   Wed, 31 Jul 2024 19:59:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:00:31 +0000   Wed, 31 Jul 2024 19:59:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:00:31 +0000   Wed, 31 Jul 2024 19:59:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-235073-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55b090e5d4e04e9e843bceddcf4718db
	  System UUID:                55b090e5-d4e0-4e9e-843b-ceddcf4718db
	  Boot ID:                    9e2c3933-f78d-4425-a10f-bddde5be171c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d7lpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-235073-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-v5g92                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-235073-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-235073-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4g5ws                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-235073-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-235073-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 96s                    kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-235073-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-235073-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-235073-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  NodeNotReady             9m22s                  node-controller  Node ha-235073-m02 status is now: NodeNotReady
	  Normal  Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node ha-235073-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node ha-235073-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s (x7 over 2m29s)  kubelet          Node ha-235073-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           115s                   node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           112s                   node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           30s                    node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	
	
	Name:               ha-235073-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_50_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:50:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:01:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:01:23 +0000   Wed, 31 Jul 2024 20:00:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:01:23 +0000   Wed, 31 Jul 2024 20:00:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:01:23 +0000   Wed, 31 Jul 2024 20:00:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:01:23 +0000   Wed, 31 Jul 2024 20:00:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    ha-235073-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9779a35d74b41fd9b9796249c8a5396
	  System UUID:                e9779a35-d74b-41fd-9b97-96249c8a5396
	  Boot ID:                    8675d547-15f8-4a72-b71d-1690b3b7d284
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wqc9h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-235073-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-964d5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-235073-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-235073-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mkrmt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-235073-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-235073-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 42s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-235073-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-235073-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-235073-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal   RegisteredNode           114s               node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal   RegisteredNode           112s               node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	  Normal   NodeNotReady             74s                node-controller  Node ha-235073-m03 status is now: NodeNotReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s (x3 over 60s)  kubelet          Node ha-235073-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x3 over 60s)  kubelet          Node ha-235073-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x3 over 60s)  kubelet          Node ha-235073-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 60s (x2 over 60s)  kubelet          Node ha-235073-m03 has been rebooted, boot id: 8675d547-15f8-4a72-b71d-1690b3b7d284
	  Normal   NodeReady                60s (x2 over 60s)  kubelet          Node ha-235073-m03 status is now: NodeReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-235073-m03 event: Registered Node ha-235073-m03 in Controller
	
	
	Name:               ha-235073-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_51_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:51:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:01:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:01:44 +0000   Wed, 31 Jul 2024 20:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:01:44 +0000   Wed, 31 Jul 2024 20:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:01:44 +0000   Wed, 31 Jul 2024 20:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:01:44 +0000   Wed, 31 Jul 2024 20:01:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-235073-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0f8c10839cf446c8b0628fe1b69511a
	  System UUID:                f0f8c108-39cf-446c-8b06-28fe1b69511a
	  Boot ID:                    bc3d5e39-2d7c-4054-8d3f-f9510e731678
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2gzbj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-jb89g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-235073-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-235073-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-235073-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-235073-m04 status is now: NodeReady
	  Normal   RegisteredNode           114s               node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   RegisteredNode           112s               node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   NodeNotReady             74s                node-controller  Node ha-235073-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-235073-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-235073-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-235073-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-235073-m04 has been rebooted, boot id: bc3d5e39-2d7c-4054-8d3f-f9510e731678
	  Normal   NodeReady                8s                 kubelet          Node ha-235073-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.063310] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060385] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.158302] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127644] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264376] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129943] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +5.303318] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.056828] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.179861] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.138103] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +5.414223] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.822229] kauditd_printk_skb: 34 callbacks suppressed
	[Jul31 19:48] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 19:55] kauditd_printk_skb: 1 callbacks suppressed
	[Jul31 19:58] systemd-fstab-generator[3793]: Ignoring "noauto" option for root device
	[  +0.154362] systemd-fstab-generator[3805]: Ignoring "noauto" option for root device
	[  +0.175828] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[  +0.141578] systemd-fstab-generator[3831]: Ignoring "noauto" option for root device
	[  +0.283144] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[  +8.041173] systemd-fstab-generator[3963]: Ignoring "noauto" option for root device
	[  +0.091416] kauditd_printk_skb: 100 callbacks suppressed
	[Jul31 19:59] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.504721] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.073460] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.050773] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d] <==
	{"level":"warn","ts":"2024-07-31T20:00:47.019803Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:51.031973Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.136:2380/version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:51.032246Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:51.878919Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"da763d5f6f242eda","rtt":"0s","error":"dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:51.880044Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"da763d5f6f242eda","rtt":"0s","error":"dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:55.035278Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.136:2380/version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:55.03532Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:56.879978Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"da763d5f6f242eda","rtt":"0s","error":"dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:56.881189Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"da763d5f6f242eda","rtt":"0s","error":"dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-31T20:00:57.806077Z","caller":"traceutil/trace.go:171","msg":"trace[171863725] transaction","detail":"{read_only:false; response_revision:2522; number_of_response:1; }","duration":"124.229568ms","start":"2024-07-31T20:00:57.681816Z","end":"2024-07-31T20:00:57.806046Z","steps":["trace[171863725] 'process raft request'  (duration: 124.073125ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:00:57.978598Z","caller":"traceutil/trace.go:171","msg":"trace[1877555136] transaction","detail":"{read_only:false; response_revision:2523; number_of_response:1; }","duration":"143.848436ms","start":"2024-07-31T20:00:57.834733Z","end":"2024-07-31T20:00:57.978582Z","steps":["trace[1877555136] 'process raft request'  (duration: 141.596781ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:00:59.037647Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.136:2380/version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:00:59.037719Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:01:01.880607Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"da763d5f6f242eda","rtt":"0s","error":"dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:01:01.881816Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"da763d5f6f242eda","rtt":"0s","error":"dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:01:03.040224Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.136:2380/version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T20:01:03.040305Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"da763d5f6f242eda","error":"Get \"https://192.168.39.136:2380/version\": dial tcp 192.168.39.136:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-31T20:01:04.640549Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:04.655852Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fc85001aa37e7974","to":"da763d5f6f242eda","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-31T20:01:04.655918Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:04.663436Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fc85001aa37e7974","to":"da763d5f6f242eda","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-31T20:01:04.663491Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:04.68604Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:04.686281Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:04.692349Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.136:45284","server-name":"","error":"EOF"}
	
	
	==> etcd [9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae] <==
	2024/07/31 19:57:19 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 19:57:19 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 19:57:19 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 19:57:19 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T19:57:19.142176Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8751779449440129714,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-31T19:57:19.222068Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:57:19.222168Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T19:57:19.222388Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fc85001aa37e7974","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T19:57:19.222542Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222578Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222601Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222677Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222798Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222856Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222867Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222873Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.222881Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.222904Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.222942Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.222994Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.223042Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.223053Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.22568Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-31T19:57:19.225918Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-31T19:57:19.22597Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-235073","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	
	
	==> kernel <==
	 20:01:52 up 15 min,  0 users,  load average: 0.50, 0.56, 0.39
	Linux ha-235073 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867] <==
	I0731 20:01:16.996327       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 20:01:26.988611       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 20:01:26.988851       1 main.go:299] handling current node
	I0731 20:01:26.988945       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 20:01:26.988996       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 20:01:26.989288       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 20:01:26.989335       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 20:01:26.989428       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 20:01:26.989447       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 20:01:36.986302       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 20:01:36.986553       1 main.go:299] handling current node
	I0731 20:01:36.986612       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 20:01:36.986650       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 20:01:36.986885       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 20:01:36.986950       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 20:01:36.987073       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 20:01:36.987201       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 20:01:46.986269       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 20:01:46.986326       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 20:01:46.986477       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 20:01:46.986501       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 20:01:46.986551       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 20:01:46.986570       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 20:01:46.986650       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 20:01:46.986659       1 main.go:299] handling current node
	
	
	==> kindnet [ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a] <==
	I0731 19:56:56.995972       1 main.go:299] handling current node
	I0731 19:56:56.995996       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:56:56.996015       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	E0731 19:57:03.652887       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1953&timeout=9m12s&timeoutSeconds=552&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0731 19:57:06.995587       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:57:06.995684       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:57:06.996005       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:57:06.996070       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 19:57:06.996207       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:57:06.996298       1 main.go:299] handling current node
	I0731 19:57:06.996326       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:57:06.996405       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:57:16.997505       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:57:16.997639       1 main.go:299] handling current node
	I0731 19:57:16.997678       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:57:16.997697       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:57:16.997919       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:57:16.997944       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:57:16.998073       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:57:16.998093       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	W0731 19:57:17.140171       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	I0731 19:57:17.142624       1 trace.go:236] Trace[402570704]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232 (31-Jul-2024 19:57:04.686) (total time: 12454ms):
	Trace[402570704]: ---"Objects listed" error:Unauthorized 12453ms (19:57:17.140)
	Trace[402570704]: [12.454266018s] [12.454266018s] END
	E0731 19:57:17.143324       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kube-apiserver [af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c] <==
	I0731 19:59:06.416170       1 options.go:221] external host was not specified, using 192.168.39.146
	I0731 19:59:06.417136       1 server.go:148] Version: v1.30.3
	I0731 19:59:06.417484       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:59:06.978032       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 19:59:06.981207       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 19:59:06.982029       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 19:59:06.982163       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 19:59:06.983462       1 instance.go:299] Using reconciler: lease
	W0731 19:59:26.971960       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0731 19:59:26.971959       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0731 19:59:26.985169       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52] <==
	I0731 19:59:47.581652       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0731 19:59:47.582269       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0731 19:59:47.682691       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:59:47.688817       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 19:59:47.712204       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 19:59:47.712313       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 19:59:47.712319       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 19:59:47.712428       1 aggregator.go:165] initial CRD sync complete...
	I0731 19:59:47.712488       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 19:59:47.712331       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 19:59:47.712546       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 19:59:47.712282       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:59:47.712517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 19:59:47.713152       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:59:47.742508       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 19:59:47.752649       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 19:59:47.752700       1 policy_source.go:224] refreshing policies
	W0731 19:59:47.775522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.136]
	I0731 19:59:47.777306       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 19:59:47.787388       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:59:47.788552       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0731 19:59:47.796628       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 19:59:48.601467       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 19:59:49.016260       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.136 192.168.39.146]
	W0731 19:59:59.018199       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.146]
	
	
	==> kube-controller-manager [7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5] <==
	I0731 19:59:06.698061       1 serving.go:380] Generated self-signed cert in-memory
	I0731 19:59:07.179958       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 19:59:07.180079       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:59:07.184815       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 19:59:07.185245       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 19:59:07.186030       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 19:59:07.186225       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0731 19:59:27.991868       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.146:8443/healthz\": dial tcp 192.168.39.146:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386] <==
	I0731 20:00:00.344268       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 20:00:00.345231       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 20:00:00.350687       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 20:00:00.362810       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 20:00:00.374598       1 shared_informer.go:320] Caches are synced for expand
	I0731 20:00:00.392909       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 20:00:00.785944       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 20:00:00.790498       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 20:00:00.790597       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 20:00:07.874620       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-89w97 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-89w97\": the object has been modified; please apply your changes to the latest version and try again"
	I0731 20:00:07.874923       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fb26398a-22ed-45f7-92f7-fdd35d48f44a", APIVersion:"v1", ResourceVersion:"233", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-89w97 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-89w97": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:00:07.892654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.112026ms"
	I0731 20:00:07.892926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="138.853µs"
	I0731 20:00:17.861716       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-89w97 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-89w97\": the object has been modified; please apply your changes to the latest version and try again"
	I0731 20:00:17.862428       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fb26398a-22ed-45f7-92f7-fdd35d48f44a", APIVersion:"v1", ResourceVersion:"233", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-89w97 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-89w97": the object has been modified; please apply your changes to the latest version and try again
	I0731 20:00:17.906050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.603199ms"
	I0731 20:00:17.906330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.812µs"
	I0731 20:00:24.838074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.039046ms"
	I0731 20:00:24.839388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.043µs"
	I0731 20:00:38.226977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.694839ms"
	I0731 20:00:38.227063       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.414µs"
	I0731 20:00:53.454672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.926µs"
	I0731 20:01:13.093535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.720832ms"
	I0731 20:01:13.093684       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.817µs"
	I0731 20:01:44.123951       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-235073-m04"
	
	
	==> kube-proxy [8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac] <==
	E0731 19:56:10.212599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:13.286345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:13.286411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:16.357062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:16.358533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:16.358373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:16.358623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:19.429934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:19.430064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:28.645824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:28.645951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:31.716794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:31.716900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:31.717095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:31.717197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:44.005525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:44.005722       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:50.154358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:50.154442       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:53.221764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:53.221905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:57:14.725300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:57:14.726196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:57:17.797495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:57:17.797988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c] <==
	I0731 19:59:07.287211       1 server_linux.go:69] "Using iptables proxy"
	E0731 19:59:08.388761       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:11.460671       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:14.533234       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:20.677036       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:29.892821       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:48.324853       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0731 19:59:48.325350       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0731 19:59:48.452248       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:59:48.452343       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:59:48.452476       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:59:48.504982       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:59:48.507505       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:59:48.508316       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:59:48.510702       1 config.go:192] "Starting service config controller"
	I0731 19:59:48.510810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:59:48.510933       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:59:48.511046       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:59:48.513336       1 config.go:319] "Starting node config controller"
	I0731 19:59:48.513449       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:59:48.611616       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:59:48.614936       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:59:48.616443       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498] <==
	W0731 19:57:10.355237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:57:10.355284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:57:10.577520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 19:57:10.577567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 19:57:10.757472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:57:10.757516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:57:10.885027       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 19:57:10.885185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 19:57:10.903286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:57:10.903362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:57:10.936730       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:57:10.936816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:57:10.971930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 19:57:10.972014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 19:57:11.050337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 19:57:11.050384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 19:57:11.326070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:57:11.326153       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 19:57:11.661014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:57:11.661150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 19:57:12.640552       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:57:12.640641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:57:16.776935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:57:16.777030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:57:19.055823       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd] <==
	W0731 19:59:38.045574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.146:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:38.045614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.146:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:41.069823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.146:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:41.069905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.146:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:43.079183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.146:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:43.079323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.146:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:43.520770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.146:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:43.520898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.146:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:43.642851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.146:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:43.642912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.146:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:45.218953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.146:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:45.219029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.146:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:47.621620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:59:47.621672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:59:47.621798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 19:59:47.621827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 19:59:47.621871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:59:47.621914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:59:47.622150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:59:47.622213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:59:47.625601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 19:59:47.625642       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 19:59:47.643740       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:59:47.644048       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 20:00:02.797582       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 19:59:48 ha-235073 kubelet[1388]: W0731 19:59:48.324561    1388 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1951": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 31 19:59:48 ha-235073 kubelet[1388]: E0731 19:59:48.325231    1388 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1951": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 31 19:59:55 ha-235073 kubelet[1388]: I0731 19:59:55.819753    1388 scope.go:117] "RemoveContainer" containerID="791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080"
	Jul 31 19:59:55 ha-235073 kubelet[1388]: E0731 19:59:55.820322    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9cd9bb70-badc-4b4b-a135-62644edac7dd)\"" pod="kube-system/storage-provisioner" podUID="9cd9bb70-badc-4b4b-a135-62644edac7dd"
	Jul 31 20:00:10 ha-235073 kubelet[1388]: I0731 20:00:10.819479    1388 scope.go:117] "RemoveContainer" containerID="791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080"
	Jul 31 20:00:10 ha-235073 kubelet[1388]: E0731 20:00:10.819657    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9cd9bb70-badc-4b4b-a135-62644edac7dd)\"" pod="kube-system/storage-provisioner" podUID="9cd9bb70-badc-4b4b-a135-62644edac7dd"
	Jul 31 20:00:24 ha-235073 kubelet[1388]: I0731 20:00:24.819750    1388 scope.go:117] "RemoveContainer" containerID="791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080"
	Jul 31 20:00:24 ha-235073 kubelet[1388]: E0731 20:00:24.819952    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9cd9bb70-badc-4b4b-a135-62644edac7dd)\"" pod="kube-system/storage-provisioner" podUID="9cd9bb70-badc-4b4b-a135-62644edac7dd"
	Jul 31 20:00:30 ha-235073 kubelet[1388]: I0731 20:00:30.473680    1388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-g9vds" podStartSLOduration=593.083381777 podStartE2EDuration="9m55.473583389s" podCreationTimestamp="2024-07-31 19:50:35 +0000 UTC" firstStartedPulling="2024-07-31 19:50:36.304652989 +0000 UTC m=+238.639029735" lastFinishedPulling="2024-07-31 19:50:38.694854601 +0000 UTC m=+241.029231347" observedRunningTime="2024-07-31 19:50:39.830296821 +0000 UTC m=+242.164673584" watchObservedRunningTime="2024-07-31 20:00:30.473583389 +0000 UTC m=+832.807960147"
	Jul 31 20:00:35 ha-235073 kubelet[1388]: I0731 20:00:35.819646    1388 scope.go:117] "RemoveContainer" containerID="791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080"
	Jul 31 20:00:35 ha-235073 kubelet[1388]: E0731 20:00:35.819964    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9cd9bb70-badc-4b4b-a135-62644edac7dd)\"" pod="kube-system/storage-provisioner" podUID="9cd9bb70-badc-4b4b-a135-62644edac7dd"
	Jul 31 20:00:37 ha-235073 kubelet[1388]: E0731 20:00:37.844781    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:00:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:00:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:00:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:00:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:00:46 ha-235073 kubelet[1388]: I0731 20:00:46.820462    1388 scope.go:117] "RemoveContainer" containerID="791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080"
	Jul 31 20:00:52 ha-235073 kubelet[1388]: I0731 20:00:52.819881    1388 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-235073" podUID="f28e113e-7c11-4a00-a8cb-fb5527042343"
	Jul 31 20:00:52 ha-235073 kubelet[1388]: I0731 20:00:52.843806    1388 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-235073"
	Jul 31 20:00:57 ha-235073 kubelet[1388]: I0731 20:00:57.984403    1388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-235073" podStartSLOduration=5.984380052 podStartE2EDuration="5.984380052s" podCreationTimestamp="2024-07-31 20:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 20:00:57.981749862 +0000 UTC m=+860.316126627" watchObservedRunningTime="2024-07-31 20:00:57.984380052 +0000 UTC m=+860.318756813"
	Jul 31 20:01:37 ha-235073 kubelet[1388]: E0731 20:01:37.845185    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:01:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:01:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:01:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:01:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:01:51.654833  147847 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-121704/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
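Note on the stderr above: the "token too long" message comes from Go's bufio.Scanner, whose default maximum token size (bufio.MaxScanTokenSize) is 64 KiB, so any single line in lastStart.txt longer than that makes Scan fail. The sketch below is only an illustration of that behavior and of enlarging the scanner buffer; the file path and buffer sizes are assumptions, not minikube's actual logs code.

	// Minimal sketch: read a file with very long lines without hitting
	// bufio.Scanner's "token too long" error by enlarging the buffer.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is 64 KiB; allow lines up to 10 MiB instead.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process the line
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call above, a long line reports "token too long" here.
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}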
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-235073 -n ha-235073
helpers_test.go:261: (dbg) Run:  kubectl --context ha-235073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (397.71s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 stop -v=7 --alsologtostderr
E0731 20:02:34.578107  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 stop -v=7 --alsologtostderr: exit status 82 (2m0.470430249s)

                                                
                                                
-- stdout --
	* Stopping node "ha-235073-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:02:11.499732  148258 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:02:11.499843  148258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:02:11.499847  148258 out.go:304] Setting ErrFile to fd 2...
	I0731 20:02:11.499851  148258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:02:11.500019  148258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:02:11.500261  148258 out.go:298] Setting JSON to false
	I0731 20:02:11.500339  148258 mustload.go:65] Loading cluster: ha-235073
	I0731 20:02:11.500709  148258 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:02:11.500797  148258 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 20:02:11.500977  148258 mustload.go:65] Loading cluster: ha-235073
	I0731 20:02:11.501100  148258 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:02:11.501137  148258 stop.go:39] StopHost: ha-235073-m04
	I0731 20:02:11.501560  148258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:02:11.501612  148258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:02:11.516609  148258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
	I0731 20:02:11.517120  148258 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:02:11.517737  148258 main.go:141] libmachine: Using API Version  1
	I0731 20:02:11.517758  148258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:02:11.518189  148258 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:02:11.521399  148258 out.go:177] * Stopping node "ha-235073-m04"  ...
	I0731 20:02:11.522871  148258 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 20:02:11.522937  148258 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 20:02:11.523217  148258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 20:02:11.523247  148258 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 20:02:11.526254  148258 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 20:02:11.526761  148258 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 21:01:39 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 20:02:11.526802  148258 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 20:02:11.526965  148258 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 20:02:11.527140  148258 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 20:02:11.527320  148258 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 20:02:11.527497  148258 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	I0731 20:02:11.608354  148258 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 20:02:11.661731  148258 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 20:02:11.714988  148258 main.go:141] libmachine: Stopping "ha-235073-m04"...
	I0731 20:02:11.715034  148258 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 20:02:11.716963  148258 main.go:141] libmachine: (ha-235073-m04) Calling .Stop
	I0731 20:02:11.720360  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 0/120
	I0731 20:02:12.721934  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 1/120
	I0731 20:02:13.723752  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 2/120
	I0731 20:02:14.725643  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 3/120
	I0731 20:02:15.727028  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 4/120
	I0731 20:02:16.728499  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 5/120
	I0731 20:02:17.730248  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 6/120
	I0731 20:02:18.731627  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 7/120
	I0731 20:02:19.733253  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 8/120
	I0731 20:02:20.734631  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 9/120
	I0731 20:02:21.736958  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 10/120
	I0731 20:02:22.738721  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 11/120
	I0731 20:02:23.740103  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 12/120
	I0731 20:02:24.741430  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 13/120
	I0731 20:02:25.742890  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 14/120
	I0731 20:02:26.744905  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 15/120
	I0731 20:02:27.746311  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 16/120
	I0731 20:02:28.747744  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 17/120
	I0731 20:02:29.749093  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 18/120
	I0731 20:02:30.751032  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 19/120
	I0731 20:02:31.753317  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 20/120
	I0731 20:02:32.754699  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 21/120
	I0731 20:02:33.756908  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 22/120
	I0731 20:02:34.759111  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 23/120
	I0731 20:02:35.760509  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 24/120
	I0731 20:02:36.762362  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 25/120
	I0731 20:02:37.763685  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 26/120
	I0731 20:02:38.765072  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 27/120
	I0731 20:02:39.766563  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 28/120
	I0731 20:02:40.767894  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 29/120
	I0731 20:02:41.769744  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 30/120
	I0731 20:02:42.771202  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 31/120
	I0731 20:02:43.772612  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 32/120
	I0731 20:02:44.774303  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 33/120
	I0731 20:02:45.775782  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 34/120
	I0731 20:02:46.777475  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 35/120
	I0731 20:02:47.779875  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 36/120
	I0731 20:02:48.781599  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 37/120
	I0731 20:02:49.784077  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 38/120
	I0731 20:02:50.785361  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 39/120
	I0731 20:02:51.787635  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 40/120
	I0731 20:02:52.789018  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 41/120
	I0731 20:02:53.790330  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 42/120
	I0731 20:02:54.791632  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 43/120
	I0731 20:02:55.792867  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 44/120
	I0731 20:02:56.794839  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 45/120
	I0731 20:02:57.796204  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 46/120
	I0731 20:02:58.798087  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 47/120
	I0731 20:02:59.799566  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 48/120
	I0731 20:03:00.800878  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 49/120
	I0731 20:03:01.803122  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 50/120
	I0731 20:03:02.804576  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 51/120
	I0731 20:03:03.805995  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 52/120
	I0731 20:03:04.807888  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 53/120
	I0731 20:03:05.809367  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 54/120
	I0731 20:03:06.811222  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 55/120
	I0731 20:03:07.813065  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 56/120
	I0731 20:03:08.814554  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 57/120
	I0731 20:03:09.815916  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 58/120
	I0731 20:03:10.817498  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 59/120
	I0731 20:03:11.819477  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 60/120
	I0731 20:03:12.821188  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 61/120
	I0731 20:03:13.822762  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 62/120
	I0731 20:03:14.824124  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 63/120
	I0731 20:03:15.825745  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 64/120
	I0731 20:03:16.827518  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 65/120
	I0731 20:03:17.828992  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 66/120
	I0731 20:03:18.830261  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 67/120
	I0731 20:03:19.831950  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 68/120
	I0731 20:03:20.833566  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 69/120
	I0731 20:03:21.835888  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 70/120
	I0731 20:03:22.838239  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 71/120
	I0731 20:03:23.839481  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 72/120
	I0731 20:03:24.840604  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 73/120
	I0731 20:03:25.842737  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 74/120
	I0731 20:03:26.844591  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 75/120
	I0731 20:03:27.846175  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 76/120
	I0731 20:03:28.847553  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 77/120
	I0731 20:03:29.848988  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 78/120
	I0731 20:03:30.850381  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 79/120
	I0731 20:03:31.852665  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 80/120
	I0731 20:03:32.854094  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 81/120
	I0731 20:03:33.855416  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 82/120
	I0731 20:03:34.856712  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 83/120
	I0731 20:03:35.858898  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 84/120
	I0731 20:03:36.860839  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 85/120
	I0731 20:03:37.862535  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 86/120
	I0731 20:03:38.863697  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 87/120
	I0731 20:03:39.865256  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 88/120
	I0731 20:03:40.866925  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 89/120
	I0731 20:03:41.868916  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 90/120
	I0731 20:03:42.870261  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 91/120
	I0731 20:03:43.871549  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 92/120
	I0731 20:03:44.873209  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 93/120
	I0731 20:03:45.874616  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 94/120
	I0731 20:03:46.876666  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 95/120
	I0731 20:03:47.878126  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 96/120
	I0731 20:03:48.879795  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 97/120
	I0731 20:03:49.881594  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 98/120
	I0731 20:03:50.883038  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 99/120
	I0731 20:03:51.884894  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 100/120
	I0731 20:03:52.886436  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 101/120
	I0731 20:03:53.887808  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 102/120
	I0731 20:03:54.889098  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 103/120
	I0731 20:03:55.891179  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 104/120
	I0731 20:03:56.893069  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 105/120
	I0731 20:03:57.894412  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 106/120
	I0731 20:03:58.895757  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 107/120
	I0731 20:03:59.897390  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 108/120
	I0731 20:04:00.898703  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 109/120
	I0731 20:04:01.901162  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 110/120
	I0731 20:04:02.903376  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 111/120
	I0731 20:04:03.904700  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 112/120
	I0731 20:04:04.906211  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 113/120
	I0731 20:04:05.908082  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 114/120
	I0731 20:04:06.909567  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 115/120
	I0731 20:04:07.911938  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 116/120
	I0731 20:04:08.913322  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 117/120
	I0731 20:04:09.914671  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 118/120
	I0731 20:04:10.915965  148258 main.go:141] libmachine: (ha-235073-m04) Waiting for machine to stop 119/120
	I0731 20:04:11.917165  148258 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 20:04:11.917253  148258 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 20:04:11.919171  148258 out.go:177] 
	W0731 20:04:11.920764  148258 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 20:04:11.920788  148258 out.go:239] * 
	* 
	W0731 20:04:11.923223  148258 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 20:04:11.924694  148258 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-235073 stop -v=7 --alsologtostderr": exit status 82
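The exit status 82 above is minikube giving up after its fixed wait: the driver polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop N/120") and, if the machine is still "Running" at the end, reports GUEST_STOP_TIMEOUT. The loop below is a rough sketch of that polling pattern for readers of this report; the function names, state check, and durations are illustrative assumptions, not the libmachine/kvm2 driver implementation.

	// Illustrative poll-until-stopped loop mirroring the
	// "Waiting for machine to stop N/120" messages in the stderr above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState is a stand-in for querying the hypervisor; assumed for this sketch.
	func vmState() string { return "Running" }

	func waitForStop(attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			if vmState() != "Running" {
				return nil // machine reached a stopped state
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120, time.Second); err != nil {
			// In minikube this condition is surfaced as GUEST_STOP_TIMEOUT (exit status 82).
			fmt.Println("stop err:", err)
		}
	}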
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr: exit status 3 (19.023334054s)

                                                
                                                
-- stdout --
	ha-235073
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-235073-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:04:11.974652  148687 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:04:11.974790  148687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:04:11.974802  148687 out.go:304] Setting ErrFile to fd 2...
	I0731 20:04:11.974808  148687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:04:11.975007  148687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:04:11.975178  148687 out.go:298] Setting JSON to false
	I0731 20:04:11.975202  148687 mustload.go:65] Loading cluster: ha-235073
	I0731 20:04:11.975301  148687 notify.go:220] Checking for updates...
	I0731 20:04:11.975572  148687 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:04:11.975589  148687 status.go:255] checking status of ha-235073 ...
	I0731 20:04:11.976036  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:11.976091  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:11.996027  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
	I0731 20:04:11.996538  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:11.997147  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:11.997167  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:11.997597  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:11.997833  148687 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 20:04:11.999811  148687 status.go:330] ha-235073 host status = "Running" (err=<nil>)
	I0731 20:04:11.999836  148687 host.go:66] Checking if "ha-235073" exists ...
	I0731 20:04:12.000137  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:12.000176  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:12.016305  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
	I0731 20:04:12.016711  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:12.017225  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:12.017248  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:12.017690  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:12.017873  148687 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 20:04:12.020806  148687 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 20:04:12.021292  148687 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 20:04:12.021327  148687 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 20:04:12.021490  148687 host.go:66] Checking if "ha-235073" exists ...
	I0731 20:04:12.021844  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:12.021884  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:12.037042  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0731 20:04:12.037511  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:12.038003  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:12.038037  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:12.038384  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:12.038594  148687 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 20:04:12.038871  148687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:04:12.038899  148687 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 20:04:12.041711  148687 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 20:04:12.042147  148687 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 20:04:12.042175  148687 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 20:04:12.042290  148687 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 20:04:12.042495  148687 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 20:04:12.042680  148687 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 20:04:12.042834  148687 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 20:04:12.123379  148687 ssh_runner.go:195] Run: systemctl --version
	I0731 20:04:12.130750  148687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:04:12.149692  148687 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 20:04:12.149728  148687 api_server.go:166] Checking apiserver status ...
	I0731 20:04:12.149776  148687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:04:12.168078  148687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5142/cgroup
	W0731 20:04:12.181390  148687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5142/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:04:12.181444  148687 ssh_runner.go:195] Run: ls
	I0731 20:04:12.190653  148687 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:04:12.195394  148687 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:04:12.195420  148687 status.go:422] ha-235073 apiserver status = Running (err=<nil>)
	I0731 20:04:12.195430  148687 status.go:257] ha-235073 status: &{Name:ha-235073 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:04:12.195446  148687 status.go:255] checking status of ha-235073-m02 ...
	I0731 20:04:12.195746  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:12.195784  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:12.210890  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0731 20:04:12.211357  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:12.211844  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:12.211866  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:12.212200  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:12.212397  148687 main.go:141] libmachine: (ha-235073-m02) Calling .GetState
	I0731 20:04:12.214103  148687 status.go:330] ha-235073-m02 host status = "Running" (err=<nil>)
	I0731 20:04:12.214121  148687 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 20:04:12.214518  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:12.214557  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:12.230037  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0731 20:04:12.230530  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:12.231092  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:12.231121  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:12.231441  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:12.231659  148687 main.go:141] libmachine: (ha-235073-m02) Calling .GetIP
	I0731 20:04:12.234605  148687 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 20:04:12.235136  148687 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:59:11 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 20:04:12.235172  148687 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 20:04:12.235313  148687 host.go:66] Checking if "ha-235073-m02" exists ...
	I0731 20:04:12.235658  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:12.235704  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:12.251153  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I0731 20:04:12.251657  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:12.252159  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:12.252181  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:12.252521  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:12.252769  148687 main.go:141] libmachine: (ha-235073-m02) Calling .DriverName
	I0731 20:04:12.253028  148687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:04:12.253050  148687 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHHostname
	I0731 20:04:12.255938  148687 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 20:04:12.256326  148687 main.go:141] libmachine: (ha-235073-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:fe:7b", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:59:11 +0000 UTC Type:0 Mac:52:54:00:41:fe:7b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-235073-m02 Clientid:01:52:54:00:41:fe:7b}
	I0731 20:04:12.256353  148687 main.go:141] libmachine: (ha-235073-m02) DBG | domain ha-235073-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:41:fe:7b in network mk-ha-235073
	I0731 20:04:12.256556  148687 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHPort
	I0731 20:04:12.256741  148687 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHKeyPath
	I0731 20:04:12.256940  148687 main.go:141] libmachine: (ha-235073-m02) Calling .GetSSHUsername
	I0731 20:04:12.257110  148687 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m02/id_rsa Username:docker}
	I0731 20:04:12.347634  148687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:04:12.365831  148687 kubeconfig.go:125] found "ha-235073" server: "https://192.168.39.254:8443"
	I0731 20:04:12.365859  148687 api_server.go:166] Checking apiserver status ...
	I0731 20:04:12.365906  148687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:04:12.382802  148687 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup
	W0731 20:04:12.392686  148687 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:04:12.392754  148687 ssh_runner.go:195] Run: ls
	I0731 20:04:12.397812  148687 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 20:04:12.402436  148687 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 20:04:12.402465  148687 status.go:422] ha-235073-m02 apiserver status = Running (err=<nil>)
	I0731 20:04:12.402476  148687 status.go:257] ha-235073-m02 status: &{Name:ha-235073-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:04:12.402496  148687 status.go:255] checking status of ha-235073-m04 ...
	I0731 20:04:12.402781  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:12.402824  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:12.417930  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0731 20:04:12.418360  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:12.418818  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:12.418837  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:12.419110  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:12.419277  148687 main.go:141] libmachine: (ha-235073-m04) Calling .GetState
	I0731 20:04:12.420928  148687 status.go:330] ha-235073-m04 host status = "Running" (err=<nil>)
	I0731 20:04:12.420947  148687 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 20:04:12.421362  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:12.421410  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:12.436702  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38033
	I0731 20:04:12.437155  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:12.437863  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:12.437890  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:12.438262  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:12.438467  148687 main.go:141] libmachine: (ha-235073-m04) Calling .GetIP
	I0731 20:04:12.441277  148687 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 20:04:12.441717  148687 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 21:01:39 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 20:04:12.441740  148687 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 20:04:12.441905  148687 host.go:66] Checking if "ha-235073-m04" exists ...
	I0731 20:04:12.442203  148687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:04:12.442245  148687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:04:12.457737  148687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I0731 20:04:12.458169  148687 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:04:12.458786  148687 main.go:141] libmachine: Using API Version  1
	I0731 20:04:12.458806  148687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:04:12.459130  148687 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:04:12.459349  148687 main.go:141] libmachine: (ha-235073-m04) Calling .DriverName
	I0731 20:04:12.459564  148687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:04:12.459589  148687 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHHostname
	I0731 20:04:12.462723  148687 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 20:04:12.463169  148687 main.go:141] libmachine: (ha-235073-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:7d:83", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 21:01:39 +0000 UTC Type:0 Mac:52:54:00:cc:7d:83 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-235073-m04 Clientid:01:52:54:00:cc:7d:83}
	I0731 20:04:12.463197  148687 main.go:141] libmachine: (ha-235073-m04) DBG | domain ha-235073-m04 has defined IP address 192.168.39.62 and MAC address 52:54:00:cc:7d:83 in network mk-ha-235073
	I0731 20:04:12.463319  148687 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHPort
	I0731 20:04:12.463478  148687 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHKeyPath
	I0731 20:04:12.463639  148687 main.go:141] libmachine: (ha-235073-m04) Calling .GetSSHUsername
	I0731 20:04:12.463839  148687 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073-m04/id_rsa Username:docker}
	W0731 20:04:30.949589  148687 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.62:22: connect: no route to host
	W0731 20:04:30.949707  148687 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	E0731 20:04:30.949728  148687 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host
	I0731 20:04:30.949745  148687 status.go:257] ha-235073-m04 status: &{Name:ha-235073-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 20:04:30.949772  148687 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.62:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-235073 -n ha-235073
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-235073 logs -n 25: (1.726915799s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-235073 ssh -n ha-235073-m02 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04:/home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m04 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp testdata/cp-test.txt                                                | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073:/home/docker/cp-test_ha-235073-m04_ha-235073.txt                       |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073 sudo cat                                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073.txt                                 |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m02:/home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m02 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m03:/home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n                                                                 | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | ha-235073-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-235073 ssh -n ha-235073-m03 sudo cat                                          | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC | 31 Jul 24 19:51 UTC |
	|         | /home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-235073 node stop m02 -v=7                                                     | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-235073 node start m02 -v=7                                                    | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:54 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-235073 -v=7                                                           | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-235073 -v=7                                                                | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-235073 --wait=true -v=7                                                    | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 19:57 UTC | 31 Jul 24 20:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-235073                                                                | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 20:01 UTC |                     |
	| node    | ha-235073 node delete m03 -v=7                                                   | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 20:01 UTC | 31 Jul 24 20:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-235073 stop -v=7                                                              | ha-235073 | jenkins | v1.33.1 | 31 Jul 24 20:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:57:18
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:57:18.126314  146425 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:57:18.126578  146425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:57:18.126587  146425 out.go:304] Setting ErrFile to fd 2...
	I0731 19:57:18.126591  146425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:57:18.126792  146425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:57:18.127416  146425 out.go:298] Setting JSON to false
	I0731 19:57:18.128313  146425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5974,"bootTime":1722449864,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:57:18.128373  146425 start.go:139] virtualization: kvm guest
	I0731 19:57:18.130640  146425 out.go:177] * [ha-235073] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:57:18.132306  146425 notify.go:220] Checking for updates...
	I0731 19:57:18.132348  146425 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:57:18.133853  146425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:57:18.135421  146425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:57:18.136790  146425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:57:18.138038  146425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:57:18.139283  146425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:57:18.140839  146425 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:57:18.140959  146425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:57:18.141421  146425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:57:18.141502  146425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:57:18.156558  146425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0731 19:57:18.157040  146425 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:57:18.157665  146425 main.go:141] libmachine: Using API Version  1
	I0731 19:57:18.157688  146425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:57:18.158069  146425 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:57:18.158239  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:57:18.191407  146425 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:57:18.192837  146425 start.go:297] selected driver: kvm2
	I0731 19:57:18.192854  146425 start.go:901] validating driver "kvm2" against &{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.62 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:57:18.192997  146425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:57:18.193360  146425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:57:18.193435  146425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:57:18.207551  146425 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:57:18.208316  146425 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:57:18.208352  146425 cni.go:84] Creating CNI manager for ""
	I0731 19:57:18.208359  146425 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 19:57:18.208426  146425 start.go:340] cluster config:
	{Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.62 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:57:18.208544  146425 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:57:18.210414  146425 out.go:177] * Starting "ha-235073" primary control-plane node in "ha-235073" cluster
	I0731 19:57:18.211712  146425 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:57:18.211748  146425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:57:18.211757  146425 cache.go:56] Caching tarball of preloaded images
	I0731 19:57:18.211844  146425 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:57:18.211856  146425 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:57:18.211965  146425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/config.json ...
	I0731 19:57:18.212162  146425 start.go:360] acquireMachinesLock for ha-235073: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:57:18.212203  146425 start.go:364] duration metric: took 24.6µs to acquireMachinesLock for "ha-235073"
	I0731 19:57:18.212217  146425 start.go:96] Skipping create...Using existing machine configuration
	I0731 19:57:18.212225  146425 fix.go:54] fixHost starting: 
	I0731 19:57:18.212510  146425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:57:18.212542  146425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:57:18.226281  146425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0731 19:57:18.226750  146425 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:57:18.227203  146425 main.go:141] libmachine: Using API Version  1
	I0731 19:57:18.227220  146425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:57:18.227597  146425 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:57:18.227772  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:57:18.227937  146425 main.go:141] libmachine: (ha-235073) Calling .GetState
	I0731 19:57:18.229194  146425 fix.go:112] recreateIfNeeded on ha-235073: state=Running err=<nil>
	W0731 19:57:18.229208  146425 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 19:57:18.230989  146425 out.go:177] * Updating the running kvm2 "ha-235073" VM ...
	I0731 19:57:18.232248  146425 machine.go:94] provisionDockerMachine start ...
	I0731 19:57:18.232263  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:57:18.232499  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.234930  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.235356  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.235412  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.235563  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.235748  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.235926  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.236096  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.236240  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:57:18.236417  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:57:18.236429  146425 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 19:57:18.338585  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073
	
	I0731 19:57:18.338616  146425 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:57:18.338888  146425 buildroot.go:166] provisioning hostname "ha-235073"
	I0731 19:57:18.338917  146425 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:57:18.339100  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.341400  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.341778  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.341808  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.341946  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.342145  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.342306  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.342456  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.342628  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:57:18.342813  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:57:18.342825  146425 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-235073 && echo "ha-235073" | sudo tee /etc/hostname
	I0731 19:57:18.456809  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-235073
	
	I0731 19:57:18.456836  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.459556  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.459948  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.459982  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.460185  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.460399  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.460556  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.460689  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.460829  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:57:18.461014  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:57:18.461036  146425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-235073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-235073/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-235073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:57:18.566301  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:57:18.566338  146425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 19:57:18.566421  146425 buildroot.go:174] setting up certificates
	I0731 19:57:18.566435  146425 provision.go:84] configureAuth start
	I0731 19:57:18.566454  146425 main.go:141] libmachine: (ha-235073) Calling .GetMachineName
	I0731 19:57:18.566776  146425 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:57:18.569304  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.569698  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.569739  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.569834  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.571970  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.572311  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.572340  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.572468  146425 provision.go:143] copyHostCerts
	I0731 19:57:18.572508  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:57:18.572551  146425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 19:57:18.572564  146425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 19:57:18.572644  146425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 19:57:18.572755  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:57:18.572781  146425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 19:57:18.572786  146425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 19:57:18.572820  146425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 19:57:18.572954  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:57:18.572981  146425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 19:57:18.572990  146425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 19:57:18.573029  146425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 19:57:18.573111  146425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.ha-235073 san=[127.0.0.1 192.168.39.146 ha-235073 localhost minikube]
	I0731 19:57:18.818409  146425 provision.go:177] copyRemoteCerts
	I0731 19:57:18.818478  146425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:57:18.818527  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.821064  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.821493  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.821522  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.821700  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.821893  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.822055  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.822162  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:57:18.900229  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:57:18.900307  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:57:18.924721  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:57:18.924794  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 19:57:18.948208  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:57:18.948287  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 19:57:18.971950  146425 provision.go:87] duration metric: took 405.496261ms to configureAuth
	I0731 19:57:18.971983  146425 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:57:18.972184  146425 config.go:182] Loaded profile config "ha-235073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:57:18.972252  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:57:18.974968  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.975326  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:57:18.975354  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:57:18.975530  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:57:18.975742  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.975903  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:57:18.976060  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:57:18.976253  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:57:18.976458  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:57:18.976475  146425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:58:49.906740  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:58:49.906771  146425 machine.go:97] duration metric: took 1m31.674510536s to provisionDockerMachine
	I0731 19:58:49.906784  146425 start.go:293] postStartSetup for "ha-235073" (driver="kvm2")
	I0731 19:58:49.906796  146425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:58:49.906829  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:49.907140  146425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:58:49.907165  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:49.910091  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:49.910503  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:49.910527  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:49.910719  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:49.910918  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:49.911097  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:49.911243  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:58:49.993402  146425 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:58:49.997622  146425 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:58:49.997648  146425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 19:58:49.997719  146425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 19:58:49.997795  146425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 19:58:49.997807  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 19:58:49.997919  146425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:58:50.007626  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:58:50.032273  146425 start.go:296] duration metric: took 125.474871ms for postStartSetup
	I0731 19:58:50.032312  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.032585  146425 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 19:58:50.032608  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:50.035057  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.035444  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.035474  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.035639  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:50.035817  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.035973  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:50.036113  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	W0731 19:58:50.116220  146425 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 19:58:50.116248  146425 fix.go:56] duration metric: took 1m31.904023426s for fixHost
	I0731 19:58:50.116270  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:50.118815  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.119321  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.119351  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.119552  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:50.119744  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.119905  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.120062  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:50.120234  146425 main.go:141] libmachine: Using SSH client type: native
	I0731 19:58:50.120434  146425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.146 22 <nil> <nil>}
	I0731 19:58:50.120449  146425 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:58:50.218209  146425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722455930.163022549
	
	I0731 19:58:50.218229  146425 fix.go:216] guest clock: 1722455930.163022549
	I0731 19:58:50.218239  146425 fix.go:229] Guest: 2024-07-31 19:58:50.163022549 +0000 UTC Remote: 2024-07-31 19:58:50.116256006 +0000 UTC m=+92.026454219 (delta=46.766543ms)
	I0731 19:58:50.218264  146425 fix.go:200] guest clock delta is within tolerance: 46.766543ms
	I0731 19:58:50.218272  146425 start.go:83] releasing machines lock for "ha-235073", held for 1m32.006059256s
	I0731 19:58:50.218296  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.218570  146425 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:58:50.221278  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.221654  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.221672  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.221827  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.222297  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.222462  146425 main.go:141] libmachine: (ha-235073) Calling .DriverName
	I0731 19:58:50.222538  146425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:58:50.222588  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:50.222632  146425 ssh_runner.go:195] Run: cat /version.json
	I0731 19:58:50.222651  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHHostname
	I0731 19:58:50.225215  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.225371  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.225590  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.225614  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.225689  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:50.225709  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:50.225750  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:50.225877  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHPort
	I0731 19:58:50.225959  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.226024  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHKeyPath
	I0731 19:58:50.226123  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:50.226184  146425 main.go:141] libmachine: (ha-235073) Calling .GetSSHUsername
	I0731 19:58:50.226301  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:58:50.226358  146425 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/ha-235073/id_rsa Username:docker}
	I0731 19:58:50.299001  146425 ssh_runner.go:195] Run: systemctl --version
	I0731 19:58:50.322573  146425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:58:50.480720  146425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:58:50.488117  146425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:58:50.488180  146425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:58:50.497571  146425 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 19:58:50.497592  146425 start.go:495] detecting cgroup driver to use...
	I0731 19:58:50.497656  146425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:58:50.513412  146425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:58:50.527207  146425 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:58:50.527276  146425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:58:50.541500  146425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:58:50.554909  146425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:58:50.708744  146425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:58:50.852347  146425 docker.go:233] disabling docker service ...
	I0731 19:58:50.852439  146425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:58:50.869186  146425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:58:50.884046  146425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:58:51.028216  146425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:58:51.172713  146425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:58:51.186354  146425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:58:51.205923  146425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:58:51.205993  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.216143  146425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:58:51.216214  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.226402  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.237655  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.248392  146425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:58:51.258883  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.268989  146425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.280655  146425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:58:51.290990  146425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:58:51.300490  146425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:58:51.309736  146425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:58:51.453094  146425 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:58:58.988616  146425 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.535487725s)
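The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before the restart. A hedged sketch of the settings they should leave in place, inferred from the commands rather than read back from the node:

  # expected result of the edits above (assumption, not captured output)
  $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  pause_image = "registry.k8s.io/pause:3.9"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
    "net.ipv4.ip_unprivileged_port_start=0",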
	I0731 19:58:58.988642  146425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:58:58.988688  146425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:58:58.993969  146425 start.go:563] Will wait 60s for crictl version
	I0731 19:58:58.994027  146425 ssh_runner.go:195] Run: which crictl
	I0731 19:58:58.998184  146425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:58:59.035495  146425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:58:59.035577  146425 ssh_runner.go:195] Run: crio --version
	I0731 19:58:59.064772  146425 ssh_runner.go:195] Run: crio --version
	I0731 19:58:59.097863  146425 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:58:59.099450  146425 main.go:141] libmachine: (ha-235073) Calling .GetIP
	I0731 19:58:59.102204  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:59.102611  146425 main.go:141] libmachine: (ha-235073) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:60:31", ip: ""} in network mk-ha-235073: {Iface:virbr1 ExpiryTime:2024-07-31 20:46:12 +0000 UTC Type:0 Mac:52:54:00:81:60:31 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:ha-235073 Clientid:01:52:54:00:81:60:31}
	I0731 19:58:59.102638  146425 main.go:141] libmachine: (ha-235073) DBG | domain ha-235073 has defined IP address 192.168.39.146 and MAC address 52:54:00:81:60:31 in network mk-ha-235073
	I0731 19:58:59.102863  146425 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:58:59.107827  146425 kubeadm.go:883] updating cluster {Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.62 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:58:59.107951  146425 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:58:59.107991  146425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:58:59.153073  146425 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:58:59.153094  146425 crio.go:433] Images already preloaded, skipping extraction
	I0731 19:58:59.153141  146425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:58:59.187839  146425 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:58:59.187864  146425 cache_images.go:84] Images are preloaded, skipping loading
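The preload check parses `sudo crictl images --output json` and concludes that nothing needs to be loaded. A hypothetical manual spot-check with the same tool:

  # lists the images bundled in the v1.30.3 / cri-o preload (hypothetical command, not in the log)
  $ minikube -p ha-235073 ssh -- sudo crictl images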
	I0731 19:58:59.187873  146425 kubeadm.go:934] updating node { 192.168.39.146 8443 v1.30.3 crio true true} ...
	I0731 19:58:59.187969  146425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-235073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:58:59.188050  146425 ssh_runner.go:195] Run: crio config
	I0731 19:58:59.244157  146425 cni.go:84] Creating CNI manager for ""
	I0731 19:58:59.244175  146425 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 19:58:59.244185  146425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:58:59.244207  146425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.146 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-235073 NodeName:ha-235073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:58:59.244329  146425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-235073"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 19:58:59.244351  146425 kube-vip.go:115] generating kube-vip config ...
	I0731 19:58:59.244391  146425 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 19:58:59.255931  146425 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 19:58:59.256024  146425 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
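The generated static-pod manifest binds the HA virtual IP 192.168.39.254 on eth0 and load-balances the API server on port 8443. A hedged sketch of how that could be checked once kube-vip is running (hypothetical commands, not part of this run):

  # hypothetical checks of the kube-vip VIP described above
  $ minikube -p ha-235073 ssh -- ip -4 addr show dev eth0   # expect 192.168.39.254 alongside 192.168.39.146
  $ curl -sk https://192.168.39.254:8443/version            # API server reachable through the VIP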
	I0731 19:58:59.256076  146425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:58:59.265207  146425 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:58:59.265273  146425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 19:58:59.274872  146425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 19:58:59.291698  146425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:58:59.308502  146425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 19:58:59.324621  146425 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 19:58:59.340786  146425 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 19:58:59.345966  146425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:58:59.504089  146425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:58:59.519033  146425 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073 for IP: 192.168.39.146
	I0731 19:58:59.519060  146425 certs.go:194] generating shared ca certs ...
	I0731 19:58:59.519082  146425 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:58:59.519288  146425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 19:58:59.519333  146425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 19:58:59.519344  146425 certs.go:256] generating profile certs ...
	I0731 19:58:59.519424  146425 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/client.key
	I0731 19:58:59.519451  146425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.f0fda48b
	I0731 19:58:59.519470  146425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.f0fda48b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.146 192.168.39.102 192.168.39.136 192.168.39.254]
	I0731 19:58:59.732199  146425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.f0fda48b ...
	I0731 19:58:59.732230  146425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.f0fda48b: {Name:mk0d0eff6286966b5094c7180b8ed30b860af134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:58:59.732415  146425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.f0fda48b ...
	I0731 19:58:59.732428  146425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.f0fda48b: {Name:mkddf010c68b82230fff7a059326ba0136a59a1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:58:59.732506  146425 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt.f0fda48b -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt
	I0731 19:58:59.732647  146425 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key.f0fda48b -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key
	I0731 19:58:59.732774  146425 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key
	I0731 19:58:59.732791  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:58:59.732803  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:58:59.732817  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:58:59.732829  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:58:59.732841  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:58:59.732853  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:58:59.732863  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:58:59.732873  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:58:59.732934  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 19:58:59.732962  146425 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 19:58:59.732971  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:58:59.732993  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:58:59.733014  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:58:59.733035  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 19:58:59.733071  146425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 19:58:59.733098  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 19:58:59.733112  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:58:59.733124  146425 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 19:58:59.733667  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:58:59.759184  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 19:58:59.782868  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:58:59.806827  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:58:59.830076  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 19:58:59.854461  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:58:59.877953  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:58:59.901800  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/ha-235073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:58:59.925119  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 19:58:59.947869  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:58:59.971595  146425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 19:58:59.994473  146425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:59:00.011519  146425 ssh_runner.go:195] Run: openssl version
	I0731 19:59:00.017395  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 19:59:00.028661  146425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 19:59:00.033047  146425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 19:59:00.033093  146425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 19:59:00.038794  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 19:59:00.048532  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 19:59:00.059359  146425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 19:59:00.064174  146425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 19:59:00.064232  146425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 19:59:00.070223  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:59:00.079819  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:59:00.090323  146425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:59:00.094497  146425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:59:00.094563  146425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:59:00.100068  146425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
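The `ln -fs ... <hash>.0` commands above follow OpenSSL's c_rehash convention: each CA is linked under its subject hash so TLS libraries can locate it. A small illustration using the same files (hash values taken from the commands above):

  # the symlink name is the certificate's subject hash
  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941
  $ ls -l /etc/ssl/certs/b5213941.0    # -> /etc/ssl/certs/minikubeCA.pem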
	I0731 19:59:00.109810  146425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:59:00.114269  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 19:59:00.119830  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 19:59:00.125412  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 19:59:00.131074  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 19:59:00.137161  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 19:59:00.142918  146425 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
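Each `-checkend 86400` probe asks OpenSSL whether the certificate remains valid for at least the next 24 hours (86400 seconds); the exit status is what matters. A hedged sketch of the pattern:

  # -checkend N exits 0 if the cert is still valid N seconds from now, non-zero otherwise
  $ openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "ok for another 24h" || echo "expires within 24h"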
	I0731 19:59:00.148449  146425 kubeadm.go:392] StartCluster: {Name:ha-235073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-235073 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.146 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.62 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:59:00.148605  146425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:59:00.148685  146425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:59:00.189822  146425 cri.go:89] found id: "84ebfd404aca6326bf68b0b8238e99a5ec5adb72818319637ec19cfcbe8631e4"
	I0731 19:59:00.189846  146425 cri.go:89] found id: "aee7190231c2884f881211e16e64da0273c102ce1b3256ddedf8a18954fcdcb2"
	I0731 19:59:00.189851  146425 cri.go:89] found id: "54f9febcea6106d9cd695ee7e37e0333d85f3158a67944dcf43a24aaab1a3672"
	I0731 19:59:00.189854  146425 cri.go:89] found id: "3881bd1062c2997bb583fb122a03ed65b220c1c102b0d2ec1599b5be1d9f6e81"
	I0731 19:59:00.189857  146425 cri.go:89] found id: "a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22"
	I0731 19:59:00.189860  146425 cri.go:89] found id: "30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90"
	I0731 19:59:00.189863  146425 cri.go:89] found id: "ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a"
	I0731 19:59:00.189865  146425 cri.go:89] found id: "8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac"
	I0731 19:59:00.189868  146425 cri.go:89] found id: "c31d2ba10cadb13f4b888c49e2a6934e94344684dfc2adf6833c2d1dc0993929"
	I0731 19:59:00.189873  146425 cri.go:89] found id: "9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae"
	I0731 19:59:00.189875  146425 cri.go:89] found id: "216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498"
	I0731 19:59:00.189878  146425 cri.go:89] found id: "cf0877f308475d05ee771157aab5de9f3da07eec38a21c9a74d76bde2eb4de77"
	I0731 19:59:00.189881  146425 cri.go:89] found id: "c6ae1a1aafd356067a53de9e770b37736ea4c621cb6bf29821cca1c4488aa31e"
	I0731 19:59:00.189883  146425 cri.go:89] found id: ""
	I0731 19:59:00.189924  146425 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.545956621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456271545931270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04b87fa1-ccbe-4497-9520-e2c793e0e191 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.546668257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c17808bd-57c1-484d-ae37-d768618affac name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.546751124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c17808bd-57c1-484d-ae37-d768618affac name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.547225203Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6fc6ee68a8ccfb01d95ed85dec112703b54962234bae1d676aa89616fd0d648,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456046839314283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455985843331084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1ba8b7107cb1bb158c11842ebcf14895a00b9078c118782deb224da5f52857,PodSandboxId:12599677e4703009288f3e1ebb26cef5d2d92ff75dc4d34f0862b423231967e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455979135587699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455978455073735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44f6367a3b0ea167ac08a9af6d2d1fa3d461c8d9327717846fb62a5557e9c2c,PodSandboxId:58c637ddc0deb5375375c3cebc48c63bec8c194a4b291d4a0efb90bceefc1b88,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722455960808358994,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a848381e2b4246b93417e0d0fd8a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722455946133407561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c,PodSandboxId:71e733d0386996f8415fcd8f9dca7d182b370c12a5d83983e5aa863ef3a11e3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722455945896815817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56,PodSandboxId:0264a243f9156fcf1716437242b46a73547a11738e3bddc34a598244d83b6db4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945975413462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9,PodSandboxId:4660c263a94b86c191ea6d914602653108603338f4ca0526650406de37c88ddd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945930399014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d,PodSandboxId:fe0a920941145e7dd18da34c0b434129669a34948d10f6f7e3e3e0b1465c05ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455945760657460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867,PodSandboxId:25c8bdc2c5ffd9917b05ef670d88081b4ac4474ccc2b30d2a90b38c56bb204a7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722455945806972829,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910
ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722455945827966617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722455945730424491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad
975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd,PodSandboxId:64211b1205b16f2e0a1cf98f66401dd8bce4ccefa11fcaf473420341b6277383,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455945528476575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722455438711854968,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annot
ations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228102941927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kube
rnetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228031302004,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722455215945193946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722455211859741609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722455191498044732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722455191481588976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c17808bd-57c1-484d-ae37-d768618affac name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.600214895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28e5b6c4-bbf0-489c-a0bd-4978c0df19ae name=/runtime.v1.RuntimeService/Version
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.600312440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28e5b6c4-bbf0-489c-a0bd-4978c0df19ae name=/runtime.v1.RuntimeService/Version
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.601597103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4470f76f-27fc-4b55-be3c-84e342a605e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.602088953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456271602062523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4470f76f-27fc-4b55-be3c-84e342a605e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.603228465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5992daf8-7d87-47db-93c6-230ef34cf76d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.603313310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5992daf8-7d87-47db-93c6-230ef34cf76d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.603998411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6fc6ee68a8ccfb01d95ed85dec112703b54962234bae1d676aa89616fd0d648,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456046839314283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455985843331084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1ba8b7107cb1bb158c11842ebcf14895a00b9078c118782deb224da5f52857,PodSandboxId:12599677e4703009288f3e1ebb26cef5d2d92ff75dc4d34f0862b423231967e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455979135587699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455978455073735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44f6367a3b0ea167ac08a9af6d2d1fa3d461c8d9327717846fb62a5557e9c2c,PodSandboxId:58c637ddc0deb5375375c3cebc48c63bec8c194a4b291d4a0efb90bceefc1b88,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722455960808358994,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a848381e2b4246b93417e0d0fd8a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722455946133407561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c,PodSandboxId:71e733d0386996f8415fcd8f9dca7d182b370c12a5d83983e5aa863ef3a11e3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722455945896815817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56,PodSandboxId:0264a243f9156fcf1716437242b46a73547a11738e3bddc34a598244d83b6db4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945975413462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9,PodSandboxId:4660c263a94b86c191ea6d914602653108603338f4ca0526650406de37c88ddd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945930399014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d,PodSandboxId:fe0a920941145e7dd18da34c0b434129669a34948d10f6f7e3e3e0b1465c05ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455945760657460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867,PodSandboxId:25c8bdc2c5ffd9917b05ef670d88081b4ac4474ccc2b30d2a90b38c56bb204a7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722455945806972829,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910
ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722455945827966617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722455945730424491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad
975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd,PodSandboxId:64211b1205b16f2e0a1cf98f66401dd8bce4ccefa11fcaf473420341b6277383,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455945528476575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722455438711854968,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annot
ations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228102941927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kube
rnetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228031302004,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722455215945193946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722455211859741609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722455191498044732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722455191481588976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5992daf8-7d87-47db-93c6-230ef34cf76d name=/runtime.v1.RuntimeService/ListContainers
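	[Editor's note, not part of the captured logs] The repeated "/runtime.v1.RuntimeService/Version", "ImageFsInfo" and "ListContainers" debug entries above are ordinary CRI gRPC calls made against CRI-O's unix socket while the logs were being collected; with no filter set, CRI-O returns the full container list, including exited containers, which is why each response repeats the same etcd/kube-apiserver/coredns entries. A minimal sketch of issuing the same ListContainers call directly, assuming CRI-O's default /var/run/crio/crio.sock endpoint (an assumption, not taken from the report), could look like this:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial CRI-O over its unix socket (endpoint is an assumption; this is
		// the conventional default for a crio runtime).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter mirrors the "No filters were applied, returning full
		// container list" entries in the log: every container is returned,
		// running or exited.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State.String())
		}
	}

	The same information is what `crictl ps -a` prints on the node; the sketch only shows where the ListContainers responses in this transcript come from.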
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.649366288Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=459e41e7-753e-4403-967f-4c1e730e00fc name=/runtime.v1.RuntimeService/Version
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.649454204Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=459e41e7-753e-4403-967f-4c1e730e00fc name=/runtime.v1.RuntimeService/Version
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.650542926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1802700f-b371-419b-b8c6-6407f93e2911 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.650987922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456271650965526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1802700f-b371-419b-b8c6-6407f93e2911 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.651542592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3fdbf6d-9079-4f59-a2ca-140e8fe257ca name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.651602057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3fdbf6d-9079-4f59-a2ca-140e8fe257ca name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.652021490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6fc6ee68a8ccfb01d95ed85dec112703b54962234bae1d676aa89616fd0d648,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456046839314283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455985843331084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1ba8b7107cb1bb158c11842ebcf14895a00b9078c118782deb224da5f52857,PodSandboxId:12599677e4703009288f3e1ebb26cef5d2d92ff75dc4d34f0862b423231967e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455979135587699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455978455073735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44f6367a3b0ea167ac08a9af6d2d1fa3d461c8d9327717846fb62a5557e9c2c,PodSandboxId:58c637ddc0deb5375375c3cebc48c63bec8c194a4b291d4a0efb90bceefc1b88,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722455960808358994,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a848381e2b4246b93417e0d0fd8a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722455946133407561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c,PodSandboxId:71e733d0386996f8415fcd8f9dca7d182b370c12a5d83983e5aa863ef3a11e3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722455945896815817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56,PodSandboxId:0264a243f9156fcf1716437242b46a73547a11738e3bddc34a598244d83b6db4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945975413462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9,PodSandboxId:4660c263a94b86c191ea6d914602653108603338f4ca0526650406de37c88ddd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945930399014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d,PodSandboxId:fe0a920941145e7dd18da34c0b434129669a34948d10f6f7e3e3e0b1465c05ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455945760657460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867,PodSandboxId:25c8bdc2c5ffd9917b05ef670d88081b4ac4474ccc2b30d2a90b38c56bb204a7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722455945806972829,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910
ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722455945827966617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722455945730424491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad
975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd,PodSandboxId:64211b1205b16f2e0a1cf98f66401dd8bce4ccefa11fcaf473420341b6277383,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455945528476575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722455438711854968,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annot
ations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228102941927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kube
rnetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228031302004,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722455215945193946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722455211859741609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722455191498044732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722455191481588976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3fdbf6d-9079-4f59-a2ca-140e8fe257ca name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.697363871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa1b7a39-d461-4644-96ae-3240634735a8 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.697457539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa1b7a39-d461-4644-96ae-3240634735a8 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.698653862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=baa1c276-8ad3-4e41-803f-282699a87631 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.699097530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722456271699075443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=baa1c276-8ad3-4e41-803f-282699a87631 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.699564208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13320b03-bdea-449e-938f-c1634f584ef3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.699633731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13320b03-bdea-449e-938f-c1634f584ef3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:04:31 ha-235073 crio[3874]: time="2024-07-31 20:04:31.700067171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6fc6ee68a8ccfb01d95ed85dec112703b54962234bae1d676aa89616fd0d648,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722456046839314283,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722455985843331084,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d1ba8b7107cb1bb158c11842ebcf14895a00b9078c118782deb224da5f52857,PodSandboxId:12599677e4703009288f3e1ebb26cef5d2d92ff75dc4d34f0862b423231967e5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722455979135587699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annotations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722455978455073735,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c44f6367a3b0ea167ac08a9af6d2d1fa3d461c8d9327717846fb62a5557e9c2c,PodSandboxId:58c637ddc0deb5375375c3cebc48c63bec8c194a4b291d4a0efb90bceefc1b88,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722455960808358994,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a848381e2b4246b93417e0d0fd8a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080,PodSandboxId:7a15c9a6957d27279b13841546d71646bf6377b358918e753a298bc3c210ac04,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722455946133407561,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cd9bb70-badc-4b4b-a135-62644edac7dd,},Annotations:map[string]string{io.kubernetes.container.hash: b5b39576,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c,PodSandboxId:71e733d0386996f8415fcd8f9dca7d182b370c12a5d83983e5aa863ef3a11e3f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722455945896815817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56,PodSandboxId:0264a243f9156fcf1716437242b46a73547a11738e3bddc34a598244d83b6db4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945975413462,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kubernetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9,PodSandboxId:4660c263a94b86c191ea6d914602653108603338f4ca0526650406de37c88ddd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722455945930399014,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"na
me\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d,PodSandboxId:fe0a920941145e7dd18da34c0b434129669a34948d10f6f7e3e3e0b1465c05ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722455945760657460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867,PodSandboxId:25c8bdc2c5ffd9917b05ef670d88081b4ac4474ccc2b30d2a90b38c56bb204a7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722455945806972829,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910
ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5,PodSandboxId:ae02c905beb3110e13beec060353e8a54bbbbb5fbd4dc4698dd906387257b502,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722455945827966617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 16f5277261cc3e0ac6eb43af812478f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c,PodSandboxId:faf476c4b677c80e51a2caa13ffeddd2527ae685f9dd4f8f3a69a86375ef3751,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722455945730424491,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a308afa2b137aad
975d5e22dcabd17,},Annotations:map[string]string{io.kubernetes.container.hash: 95edc133,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd,PodSandboxId:64211b1205b16f2e0a1cf98f66401dd8bce4ccefa11fcaf473420341b6277383,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722455945528476575,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d67125ccdbad5f98a9142c81bc6585651ec4059eed554dfbe1f5cb5be99c60,PodSandboxId:6c4d1efc4989e0b2aa28c3cbda2d3f5d4b5e9252f3d7895297f54307dc7ab9f6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722455438711854968,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-g9vds,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1b34d06-e944-4236-afe0-1ee06ba4e666,},Annot
ations:map[string]string{io.kubernetes.container.hash: 212b9e5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22,PodSandboxId:55ec4971c2e64ac2d6f9784d622423e7577ab445afbc4f722a284ada62c68dc6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228102941927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d2w7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c47597b4-a38b-438c-9c3b-8f7f45130f75,},Annotations:map[string]string{io.kube
rnetes.container.hash: b29b8cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90,PodSandboxId:231aebfc0631bd83287e33a69afbca01d1373f4b939d8bf65f4ac794aaf52012,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722455228031302004,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f7dzt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9549b5d7-bb23-4934-883b-dd07f8d864d8,},Annotations:map[string]string{io.kubernetes.container.hash: b9bec004,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a,PodSandboxId:feeccc2a1a3e7406eaea5ea90a171a8661853a60ee426f70434f26aac0f00112,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722455215945193946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6mpsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1910ba34-e9d4-4fb4-9f2b-6b00dad3a3ef,},Annotations:map[string]string{io.kubernetes.container.hash: 390f5316,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac,PodSandboxId:dbf6b114c5cb5c19782cdc6e76a5915ae8e26cd82af58e80687fa0f5d3e199fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722455211859741609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-td8j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b836edfa-4df1-40e4-a58a-3f23afd5b78b,},Annotations:map[string]string{io.kubernetes.container.hash: 33a934d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae,PodSandboxId:58bfb1289eb0474e39bf94aa0b78076816c68c357787822692fe3617a82226cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722455191498044732,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b69e7963c2d0df8833554e4876687c49,},Annotations:map[string]string{io.kubernetes.container.hash: d16169dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498,PodSandboxId:c9f1bb2690babd5d10c24de1404c57fafca2597594caac9dd0c659d72a8b552d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722455191481588976,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-235073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adddf646550f2cb39fef0b7f6c02c656,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13320b03-bdea-449e-938f-c1634f584ef3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6fc6ee68a8cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   7a15c9a6957d2       storage-provisioner
	b7502ddf2c06d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   faf476c4b677c       kube-apiserver-ha-235073
	9d1ba8b7107cb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   12599677e4703       busybox-fc5497c4f-g9vds
	f60d3b03e3fca       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   ae02c905beb31       kube-controller-manager-ha-235073
	c44f6367a3b0e       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   58c637ddc0deb       kube-vip-ha-235073
	791841acf2544       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   7a15c9a6957d2       storage-provisioner
	9c2b76f0b85b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   0264a243f9156       coredns-7db6d8ff4d-d2w7q
	5156fa7e1ef42       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   4660c263a94b8       coredns-7db6d8ff4d-f7dzt
	e7e5af865d5da       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   71e733d038699       kube-proxy-td8j2
	7b55197320466       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   ae02c905beb31       kube-controller-manager-ha-235073
	4af170fcfb9dc       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   25c8bdc2c5ffd       kindnet-6mpsn
	097aa4401bf25       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   fe0a920941145       etcd-ha-235073
	af36b29bca740       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   faf476c4b677c       kube-apiserver-ha-235073
	787234b628452       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   64211b1205b16       kube-scheduler-ha-235073
	36d67125ccdba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   6c4d1efc4989e       busybox-fc5497c4f-g9vds
	a9ddbd3f3cc5f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   55ec4971c2e64       coredns-7db6d8ff4d-d2w7q
	30540ee956135       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   231aebfc0631b       coredns-7db6d8ff4d-f7dzt
	ee50c4b9e2394       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    17 minutes ago      Exited              kindnet-cni               0                   feeccc2a1a3e7       kindnet-6mpsn
	8811952c62538       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      17 minutes ago      Exited              kube-proxy                0                   dbf6b114c5cb5       kube-proxy-td8j2
	9d642debf242f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      18 minutes ago      Exited              etcd                      0                   58bfb1289eb04       etcd-ha-235073
	216984c6b7d59       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      18 minutes ago      Exited              kube-scheduler            0                   c9f1bb2690bab       kube-scheduler-ha-235073
	
	
	==> coredns [30540ee956135e961a2eeabdc4f234f18455f75bb21b66afeef232ca2805dd90] <==
	[INFO] 10.244.1.2:60484 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143802s
	[INFO] 10.244.0.4:58480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129276s
	[INFO] 10.244.2.2:36458 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001308986s
	[INFO] 10.244.2.2:48644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094253s
	[INFO] 10.244.1.2:34972 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151042s
	[INFO] 10.244.1.2:32819 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096017s
	[INFO] 10.244.1.2:48157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075225s
	[INFO] 10.244.0.4:54613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084738s
	[INFO] 10.244.0.4:60576 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000829s
	[INFO] 10.244.2.2:36544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164516s
	[INFO] 10.244.2.2:45708 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142016s
	[INFO] 10.244.2.2:40736 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110815s
	[INFO] 10.244.2.2:36751 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104862s
	[INFO] 10.244.1.2:54006 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000448605s
	[INFO] 10.244.1.2:59479 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121156s
	[INFO] 10.244.0.4:33169 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051358s
	[INFO] 10.244.2.2:44195 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135177s
	[INFO] 10.244.2.2:36586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153451s
	[INFO] 10.244.2.2:56302 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124509s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: Unexpected error when reading response body: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [5156fa7e1ef427ac5b1607e7451d7295b3ae7c49569d43a25303797272b761c9] <==
	Trace[1923712009]: [10.001563968s] [10.001563968s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43204->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[136503993]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:59:17.526) (total time: 10637ms):
	Trace[136503993]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43204->10.96.0.1:443: read: connection reset by peer 10637ms (19:59:28.164)
	Trace[136503993]: [10.637912164s] [10.637912164s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43204->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9c2b76f0b85b953ff01f0545cceb7e2fb48507448aed0678ac4371e65cd98c56] <==
	Trace[1272621397]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:59:25.494)
	Trace[1272621397]: [10.001001244s] [10.001001244s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1819968330]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:59:15.532) (total time: 10000ms):
	Trace[1819968330]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:59:25.533)
	Trace[1819968330]: [10.000795402s] [10.000795402s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a9ddbd3f3cc5f71da35c6165c799a35b8d72e224de7a0c4687a173f4de880f22] <==
	[INFO] 10.244.1.2:42728 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124693s
	[INFO] 10.244.0.4:54532 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008837s
	[INFO] 10.244.0.4:52959 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063732s
	[INFO] 10.244.0.4:56087 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045645s
	[INFO] 10.244.2.2:42350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130124s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=2004&timeout=6m53s&timeoutSeconds=413&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1981&timeout=9m18s&timeoutSeconds=558&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1976&timeout=9m39s&timeoutSeconds=579&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1230429363]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:57:04.688) (total time: 12454ms):
	Trace[1230429363]: ---"Objects listed" error:Unauthorized 12454ms (19:57:17.143)
	Trace[1230429363]: [12.454777329s] [12.454777329s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[440821777]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:57:05.074) (total time: 12069ms):
	Trace[440821777]: ---"Objects listed" error:Unauthorized 12069ms (19:57:17.143)
	Trace[440821777]: [12.069408161s] [12.069408161s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1045963971]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:57:04.934) (total time: 12209ms):
	Trace[1045963971]: ---"Objects listed" error:Unauthorized 12209ms (19:57:17.144)
	Trace[1045963971]: [12.209618911s] [12.209618911s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-235073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_46_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:46:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:04:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:59:48 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:59:48 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:59:48 +0000   Wed, 31 Jul 2024 19:46:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:59:48 +0000   Wed, 31 Jul 2024 19:47:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    ha-235073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e35869b5bfb347c6a5e12e63b257d2a1
	  System UUID:                e35869b5-bfb3-47c6-a5e1-2e63b257d2a1
	  Boot ID:                    846162a9-11ef-48d0-b284-9320ff7be7d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-g9vds              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-d2w7q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-f7dzt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-235073                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-6mpsn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-235073             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-235073    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-td8j2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-235073             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-235073                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 4m43s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-235073 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-235073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-235073 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-235073 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Warning  ContainerGCFailed        5m55s (x2 over 6m55s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m35s                  node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal   RegisteredNode           4m32s                  node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-235073 event: Registered Node ha-235073 in Controller
	
	
	Name:               ha-235073-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_48_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:48:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:04:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:00:31 +0000   Wed, 31 Jul 2024 19:59:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:00:31 +0000   Wed, 31 Jul 2024 19:59:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:00:31 +0000   Wed, 31 Jul 2024 19:59:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:00:31 +0000   Wed, 31 Jul 2024 19:59:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-235073-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55b090e5d4e04e9e843bceddcf4718db
	  System UUID:                55b090e5-d4e0-4e9e-843b-ceddcf4718db
	  Boot ID:                    9e2c3933-f78d-4425-a10f-bddde5be171c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d7lpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-235073-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-v5g92                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-235073-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-235073-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4g5ws                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-235073-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-235073-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m15s                kube-proxy       
	  Normal  Starting                 15m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)    kubelet          Node ha-235073-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)    kubelet          Node ha-235073-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)    kubelet          Node ha-235073-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           15m                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           14m                  node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  NodeNotReady             12m                  node-controller  Node ha-235073-m02 status is now: NodeNotReady
	  Normal  Starting                 5m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node ha-235073-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node ha-235073-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m9s (x7 over 5m9s)  kubelet          Node ha-235073-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m35s                node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           4m32s                node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	  Normal  RegisteredNode           3m10s                node-controller  Node ha-235073-m02 event: Registered Node ha-235073-m02 in Controller
	
	
	Name:               ha-235073-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-235073-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=ha-235073
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_51_11_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:51:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-235073-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:02:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 20:01:44 +0000   Wed, 31 Jul 2024 20:02:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 20:01:44 +0000   Wed, 31 Jul 2024 20:02:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 20:01:44 +0000   Wed, 31 Jul 2024 20:02:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 20:01:44 +0000   Wed, 31 Jul 2024 20:02:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-235073-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0f8c10839cf446c8b0628fe1b69511a
	  System UUID:                f0f8c108-39cf-446c-8b06-28fe1b69511a
	  Boot ID:                    bc3d5e39-2d7c-4054-8d3f-f9510e731678
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wv85m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-2gzbj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-jb89g           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-235073-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-235073-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-235073-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-235073-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m34s                  node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   RegisteredNode           4m32s                  node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   NodeNotReady             3m54s                  node-controller  Node ha-235073-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-235073-m04 event: Registered Node ha-235073-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-235073-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-235073-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-235073-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-235073-m04 has been rebooted, boot id: bc3d5e39-2d7c-4054-8d3f-f9510e731678
	  Normal   NodeReady                2m48s                  kubelet          Node ha-235073-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-235073-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.063310] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060385] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.158302] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127644] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264376] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.129943] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +5.303318] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.056828] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.179861] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.138103] systemd-fstab-generator[1381]: Ignoring "noauto" option for root device
	[  +5.414223] kauditd_printk_skb: 23 callbacks suppressed
	[ +13.822229] kauditd_printk_skb: 34 callbacks suppressed
	[Jul31 19:48] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 19:55] kauditd_printk_skb: 1 callbacks suppressed
	[Jul31 19:58] systemd-fstab-generator[3793]: Ignoring "noauto" option for root device
	[  +0.154362] systemd-fstab-generator[3805]: Ignoring "noauto" option for root device
	[  +0.175828] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[  +0.141578] systemd-fstab-generator[3831]: Ignoring "noauto" option for root device
	[  +0.283144] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[  +8.041173] systemd-fstab-generator[3963]: Ignoring "noauto" option for root device
	[  +0.091416] kauditd_printk_skb: 100 callbacks suppressed
	[Jul31 19:59] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.504721] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.073460] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.050773] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [097aa4401bf259c32e8722fb7124782087d94b805245dee0e8d2760aec8daf4d] <==
	{"level":"info","ts":"2024-07-31T20:01:04.655918Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:04.663436Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fc85001aa37e7974","to":"da763d5f6f242eda","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-31T20:01:04.663491Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:04.68604Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:04.686281Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:04.692349Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.136:45284","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-31T20:01:57.924712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc85001aa37e7974 switched to configuration voters=(6295640380314472659 18195949983872481652)"}
	{"level":"info","ts":"2024-07-31T20:01:57.927089Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"25c4f0770a3181de","local-member-id":"fc85001aa37e7974","removed-remote-peer-id":"da763d5f6f242eda","removed-remote-peer-urls":["https://192.168.39.136:2380"]}
	{"level":"info","ts":"2024-07-31T20:01:57.927222Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:57.927363Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"fc85001aa37e7974","removed-member-id":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:57.927475Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-07-31T20:01:57.927543Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:57.927623Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:57.92832Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:57.928385Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:57.928519Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:57.928773Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda","error":"context canceled"}
	{"level":"warn","ts":"2024-07-31T20:01:57.928869Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"da763d5f6f242eda","error":"failed to read da763d5f6f242eda on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-31T20:01:57.928916Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:57.929395Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-07-31T20:01:57.929664Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:57.929847Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T20:01:57.929895Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"fc85001aa37e7974","removed-remote-peer-id":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:57.949936Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fc85001aa37e7974","remote-peer-id-stream-handler":"fc85001aa37e7974","remote-peer-id-from":"da763d5f6f242eda"}
	{"level":"warn","ts":"2024-07-31T20:01:57.960259Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fc85001aa37e7974","remote-peer-id-stream-handler":"fc85001aa37e7974","remote-peer-id-from":"da763d5f6f242eda"}
	
	
	==> etcd [9d642debf242fb07cc792132a55eddbb1c15f26311a8d754a5d8fe06c8b598ae] <==
	2024/07/31 19:57:19 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 19:57:19 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 19:57:19 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 19:57:19 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T19:57:19.142176Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8751779449440129714,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-31T19:57:19.222068Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:57:19.222168Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.146:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T19:57:19.222388Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fc85001aa37e7974","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T19:57:19.222542Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222578Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222601Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222677Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222798Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222856Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222867Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"575e9b91f63fd0d3"}
	{"level":"info","ts":"2024-07-31T19:57:19.222873Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.222881Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.222904Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.222942Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.222994Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.223042Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fc85001aa37e7974","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.223053Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"da763d5f6f242eda"}
	{"level":"info","ts":"2024-07-31T19:57:19.22568Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-31T19:57:19.225918Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.146:2380"}
	{"level":"info","ts":"2024-07-31T19:57:19.22597Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-235073","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.146:2380"],"advertise-client-urls":["https://192.168.39.146:2379"]}
	
	
	==> kernel <==
	 20:04:32 up 18 min,  0 users,  load average: 0.20, 0.44, 0.37
	Linux ha-235073 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4af170fcfb9dc8411f1d8fbed048ee4eb4418d5442c02d777d6b8f4e7be30867] <==
	I0731 20:03:46.990625       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 20:03:56.989480       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 20:03:56.989537       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 20:03:56.989672       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 20:03:56.989705       1 main.go:299] handling current node
	I0731 20:03:56.989721       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 20:03:56.989728       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 20:04:06.986749       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 20:04:06.986929       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 20:04:06.987079       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 20:04:06.987192       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 20:04:06.987298       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 20:04:06.987323       1 main.go:299] handling current node
	I0731 20:04:16.995450       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 20:04:16.995574       1 main.go:299] handling current node
	I0731 20:04:16.995620       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 20:04:16.995626       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 20:04:16.995910       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 20:04:16.995937       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 20:04:26.987081       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 20:04:26.987290       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 20:04:26.987461       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 20:04:26.987488       1 main.go:299] handling current node
	I0731 20:04:26.987511       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 20:04:26.987527       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [ee50c4b9e239496652640464181ee3dd4a9d21d5a5123d1d17fbb7a50e29dc1a] <==
	I0731 19:56:56.995972       1 main.go:299] handling current node
	I0731 19:56:56.995996       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:56:56.996015       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	E0731 19:57:03.652887       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1953&timeout=9m12s&timeoutSeconds=552&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0731 19:57:06.995587       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:57:06.995684       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:57:06.996005       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:57:06.996070       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	I0731 19:57:06.996207       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:57:06.996298       1 main.go:299] handling current node
	I0731 19:57:06.996326       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:57:06.996405       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:57:16.997505       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0731 19:57:16.997639       1 main.go:299] handling current node
	I0731 19:57:16.997678       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0731 19:57:16.997697       1 main.go:322] Node ha-235073-m02 has CIDR [10.244.1.0/24] 
	I0731 19:57:16.997919       1 main.go:295] Handling node with IPs: map[192.168.39.136:{}]
	I0731 19:57:16.997944       1 main.go:322] Node ha-235073-m03 has CIDR [10.244.2.0/24] 
	I0731 19:57:16.998073       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0731 19:57:16.998093       1 main.go:322] Node ha-235073-m04 has CIDR [10.244.3.0/24] 
	W0731 19:57:17.140171       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	I0731 19:57:17.142624       1 trace.go:236] Trace[402570704]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232 (31-Jul-2024 19:57:04.686) (total time: 12454ms):
	Trace[402570704]: ---"Objects listed" error:Unauthorized 12453ms (19:57:17.140)
	Trace[402570704]: [12.454266018s] [12.454266018s] END
	E0731 19:57:17.143324       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kube-apiserver [af36b29bca740794d7c0b4e50678dfd727788c6c5af5ef49b306441037b9027c] <==
	I0731 19:59:06.416170       1 options.go:221] external host was not specified, using 192.168.39.146
	I0731 19:59:06.417136       1 server.go:148] Version: v1.30.3
	I0731 19:59:06.417484       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:59:06.978032       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 19:59:06.981207       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 19:59:06.982029       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 19:59:06.982163       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 19:59:06.983462       1 instance.go:299] Using reconciler: lease
	W0731 19:59:26.971960       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0731 19:59:26.971959       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0731 19:59:26.985169       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b7502ddf2c06deb62269b97a51c20850ac0228229029f4bf9f8ef9523e50ec52] <==
	I0731 19:59:47.582269       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0731 19:59:47.682691       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:59:47.688817       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 19:59:47.712204       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 19:59:47.712313       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 19:59:47.712319       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 19:59:47.712428       1 aggregator.go:165] initial CRD sync complete...
	I0731 19:59:47.712488       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 19:59:47.712331       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 19:59:47.712546       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 19:59:47.712282       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:59:47.712517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 19:59:47.713152       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:59:47.742508       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 19:59:47.752649       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 19:59:47.752700       1 policy_source.go:224] refreshing policies
	W0731 19:59:47.775522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.136]
	I0731 19:59:47.777306       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 19:59:47.787388       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:59:47.788552       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0731 19:59:47.796628       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 19:59:48.601467       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 19:59:49.016260       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.136 192.168.39.146]
	W0731 19:59:59.018199       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.146]
	W0731 20:02:09.023909       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.146]
	
	
	==> kube-controller-manager [7b5519732046627c4a96cdc2e2575d18c859b61afc81a835def1808fcdfb47a5] <==
	I0731 19:59:06.698061       1 serving.go:380] Generated self-signed cert in-memory
	I0731 19:59:07.179958       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 19:59:07.180079       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:59:07.184815       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 19:59:07.185245       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 19:59:07.186030       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 19:59:07.186225       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0731 19:59:27.991868       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.146:8443/healthz\": dial tcp 192.168.39.146:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f60d3b03e3fca1412fdfe4a1336d714af079600794b5d69b97e45212778ac386] <==
	I0731 20:01:54.759349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.825392ms"
	I0731 20:01:54.780363       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.944556ms"
	I0731 20:01:54.780650       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.39µs"
	I0731 20:01:54.827489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.308212ms"
	I0731 20:01:54.827984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.549µs"
	I0731 20:01:56.715054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.97µs"
	I0731 20:01:57.302429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.009µs"
	I0731 20:01:57.328389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.549µs"
	I0731 20:01:57.332588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.069µs"
	I0731 20:01:58.509624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.256687ms"
	I0731 20:01:58.510414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.069µs"
	I0731 20:02:09.397239       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-235073-m04"
	E0731 20:02:09.433634       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-235073-m03", UID:"0266ad25-45f1-4594-98b2-8141b69314dd", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-235073-m03", UID:"6c53ce53-9993-4b7d-b499-f77f0c446884", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-235073-m03" not found
	E0731 20:02:20.133360       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:20.133526       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:20.133553       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:20.133577       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:20.133616       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:40.134484       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:40.134550       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:40.134566       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:40.134575       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	E0731 20:02:40.134583       1 gc_controller.go:153] "Failed to get node" err="node \"ha-235073-m03\" not found" logger="pod-garbage-collector-controller" node="ha-235073-m03"
	I0731 20:02:45.270010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.637697ms"
	I0731 20:02:45.270096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.469µs"
	
	
	==> kube-proxy [8811952c6253860c880f1b6a403fc858c3e6399e99e9a825cf0b05f791ebf3ac] <==
	E0731 19:56:10.212599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:13.286345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:13.286411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:16.357062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:16.358533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:16.358373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:16.358623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:19.429934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:19.430064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:28.645824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:28.645951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:31.716794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:31.716900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:31.717095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:31.717197       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:44.005525       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:44.005722       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:50.154358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:50.154442       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:56:53.221764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:56:53.221905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1973": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:57:14.725300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:57:14.726196       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-235073&resourceVersion=1953": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 19:57:17.797495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 19:57:17.797988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1999": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [e7e5af865d5da458640b3360a4da109eb53e95c35c3b5a12f9446af71c28680c] <==
	I0731 19:59:07.287211       1 server_linux.go:69] "Using iptables proxy"
	E0731 19:59:08.388761       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:11.460671       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:14.533234       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:20.677036       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:29.892821       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 19:59:48.324853       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-235073\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0731 19:59:48.325350       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0731 19:59:48.452248       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:59:48.452343       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:59:48.452476       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:59:48.504982       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:59:48.507505       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:59:48.508316       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:59:48.510702       1 config.go:192] "Starting service config controller"
	I0731 19:59:48.510810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:59:48.510933       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:59:48.511046       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:59:48.513336       1 config.go:319] "Starting node config controller"
	I0731 19:59:48.513449       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:59:48.611616       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:59:48.614936       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:59:48.616443       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [216984c6b7d59f01f104316e0505bcc5baf47b8d8b5b230f6641c7dd73533498] <==
	W0731 19:57:10.355237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:57:10.355284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:57:10.577520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 19:57:10.577567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 19:57:10.757472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:57:10.757516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:57:10.885027       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 19:57:10.885185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 19:57:10.903286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:57:10.903362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:57:10.936730       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:57:10.936816       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:57:10.971930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 19:57:10.972014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 19:57:11.050337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 19:57:11.050384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 19:57:11.326070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:57:11.326153       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 19:57:11.661014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:57:11.661150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 19:57:12.640552       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:57:12.640641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:57:16.776935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:57:16.777030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:57:19.055823       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [787234b628452985dd01b9eeae1a07be3f75c788f421c79acb1dc55a4f0cb1bd] <==
	W0731 19:59:38.045574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.146:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:38.045614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.146:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:41.069823       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.146:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:41.069905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.146:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:43.079183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.146:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:43.079323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.146:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:43.520770       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.146:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:43.520898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.146:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:43.642851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.146:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:43.642912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.146:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:45.218953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.146:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	E0731 19:59:45.219029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.146:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.146:8443: connect: connection refused
	W0731 19:59:47.621620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:59:47.621672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:59:47.621798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 19:59:47.621827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 19:59:47.621871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:59:47.621914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:59:47.622150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:59:47.622213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:59:47.625601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 19:59:47.625642       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 19:59:47.643740       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:59:47.644048       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 20:00:02.797582       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 20:00:35 ha-235073 kubelet[1388]: E0731 20:00:35.819964    1388 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9cd9bb70-badc-4b4b-a135-62644edac7dd)\"" pod="kube-system/storage-provisioner" podUID="9cd9bb70-badc-4b4b-a135-62644edac7dd"
	Jul 31 20:00:37 ha-235073 kubelet[1388]: E0731 20:00:37.844781    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:00:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:00:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:00:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:00:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:00:46 ha-235073 kubelet[1388]: I0731 20:00:46.820462    1388 scope.go:117] "RemoveContainer" containerID="791841acf25442adb2adc892c7cd5548bb63d8bfccaaebf860d004aee02b6080"
	Jul 31 20:00:52 ha-235073 kubelet[1388]: I0731 20:00:52.819881    1388 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-235073" podUID="f28e113e-7c11-4a00-a8cb-fb5527042343"
	Jul 31 20:00:52 ha-235073 kubelet[1388]: I0731 20:00:52.843806    1388 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-235073"
	Jul 31 20:00:57 ha-235073 kubelet[1388]: I0731 20:00:57.984403    1388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-235073" podStartSLOduration=5.984380052 podStartE2EDuration="5.984380052s" podCreationTimestamp="2024-07-31 20:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 20:00:57.981749862 +0000 UTC m=+860.316126627" watchObservedRunningTime="2024-07-31 20:00:57.984380052 +0000 UTC m=+860.318756813"
	Jul 31 20:01:37 ha-235073 kubelet[1388]: E0731 20:01:37.845185    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:01:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:01:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:01:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:01:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:02:37 ha-235073 kubelet[1388]: E0731 20:02:37.840347    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:02:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:02:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:02:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:02:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:03:37 ha-235073 kubelet[1388]: E0731 20:03:37.846179    1388 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:03:37 ha-235073 kubelet[1388]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:03:37 ha-235073 kubelet[1388]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:03:37 ha-235073 kubelet[1388]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:03:37 ha-235073 kubelet[1388]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:04:31.276187  148847 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-121704/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-235073 -n ha-235073
helpers_test.go:261: (dbg) Run:  kubectl --context ha-235073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.85s)
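Note: the "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while the harness reads lastStart.txt. Below is a minimal, self-contained Go sketch of reading such a file with an enlarged scan buffer; it is illustrative only, not minikube's actual logs code, and the file name is a placeholder.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path, for illustration only
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default per-line limit is bufio.MaxScanTokenSize (64 KiB); raise it
		// so longer log lines no longer fail with "token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
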

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (332.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-094885
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-094885
E0731 20:20:09.826192  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-094885: exit status 82 (2m1.860773566s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-094885-m03"  ...
	* Stopping node "multinode-094885-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-094885" : exit status 82
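Note: exit status 82 with GUEST_STOP_TIMEOUT above means the VMs never reached a stopped state before minikube's stop deadline (the stop attempt ran for roughly two minutes before giving up). The following is a rough, generic Go sketch of that kind of bounded wait; it is not minikube's actual stop code, and currentState plus the 30-second timeout are stand-ins for illustration.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// currentState is a hypothetical stand-in for the kvm2 driver's status query.
	func currentState() string { return "Running" }

	// waitForStop polls until the VM reports "Stopped" or the deadline passes.
	func waitForStop(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if currentState() == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(30 * time.Second); err != nil {
			// Roughly what the harness surfaces above as GUEST_STOP_TIMEOUT.
			fmt.Println("stop timed out:", err)
		}
	}
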
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094885 --wait=true -v=8 --alsologtostderr
E0731 20:22:34.580741  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 20:23:12.871642  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 20:25:09.824768  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-094885 --wait=true -v=8 --alsologtostderr: (3m27.951788808s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-094885
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-094885 -n multinode-094885
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-094885 logs -n 25: (1.489025618s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m02:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4009504673/001/cp-test_multinode-094885-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m02:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885:/home/docker/cp-test_multinode-094885-m02_multinode-094885.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885 sudo cat                                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m02_multinode-094885.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m02:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03:/home/docker/cp-test_multinode-094885-m02_multinode-094885-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885-m03 sudo cat                                   | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m02_multinode-094885-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp testdata/cp-test.txt                                                | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4009504673/001/cp-test_multinode-094885-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885:/home/docker/cp-test_multinode-094885-m03_multinode-094885.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885 sudo cat                                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m03_multinode-094885.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02:/home/docker/cp-test_multinode-094885-m03_multinode-094885-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885-m02 sudo cat                                   | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m03_multinode-094885-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-094885 node stop m03                                                          | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	| node    | multinode-094885 node start                                                             | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-094885                                                                | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC |                     |
	| stop    | -p multinode-094885                                                                     | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC |                     |
	| start   | -p multinode-094885                                                                     | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:21 UTC | 31 Jul 24 20:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-094885                                                                | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:21:49
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:21:49.814033  158660 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:21:49.814280  158660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:21:49.814289  158660 out.go:304] Setting ErrFile to fd 2...
	I0731 20:21:49.814293  158660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:21:49.814488  158660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:21:49.815055  158660 out.go:298] Setting JSON to false
	I0731 20:21:49.815994  158660 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7446,"bootTime":1722449864,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:21:49.816066  158660 start.go:139] virtualization: kvm guest
	I0731 20:21:49.818471  158660 out.go:177] * [multinode-094885] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:21:49.820045  158660 notify.go:220] Checking for updates...
	I0731 20:21:49.820053  158660 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:21:49.821356  158660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:21:49.822690  158660 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:21:49.823849  158660 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:21:49.825020  158660 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:21:49.826191  158660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:21:49.827891  158660 config.go:182] Loaded profile config "multinode-094885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:21:49.827974  158660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:21:49.828361  158660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:21:49.828418  158660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:21:49.843387  158660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0731 20:21:49.843798  158660 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:21:49.844453  158660 main.go:141] libmachine: Using API Version  1
	I0731 20:21:49.844482  158660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:21:49.844822  158660 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:21:49.845021  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:21:49.880438  158660 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:21:49.881682  158660 start.go:297] selected driver: kvm2
	I0731 20:21:49.881696  158660 start.go:901] validating driver "kvm2" against &{Name:multinode-094885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-094885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.53 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:21:49.881849  158660 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:21:49.882163  158660 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:21:49.882231  158660 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:21:49.897771  158660 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:21:49.898448  158660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:21:49.898476  158660 cni.go:84] Creating CNI manager for ""
	I0731 20:21:49.898484  158660 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 20:21:49.898549  158660 start.go:340] cluster config:
	{Name:multinode-094885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-094885 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.53 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:21:49.898701  158660 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:21:49.901238  158660 out.go:177] * Starting "multinode-094885" primary control-plane node in "multinode-094885" cluster
	I0731 20:21:49.902506  158660 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:21:49.902536  158660 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:21:49.902543  158660 cache.go:56] Caching tarball of preloaded images
	I0731 20:21:49.902639  158660 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:21:49.902651  158660 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:21:49.902774  158660 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/config.json ...
	I0731 20:21:49.902963  158660 start.go:360] acquireMachinesLock for multinode-094885: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:21:49.903007  158660 start.go:364] duration metric: took 25.586µs to acquireMachinesLock for "multinode-094885"
	I0731 20:21:49.903026  158660 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:21:49.903035  158660 fix.go:54] fixHost starting: 
	I0731 20:21:49.903282  158660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:21:49.903316  158660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:21:49.917918  158660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0731 20:21:49.918370  158660 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:21:49.918983  158660 main.go:141] libmachine: Using API Version  1
	I0731 20:21:49.919008  158660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:21:49.919375  158660 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:21:49.919579  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:21:49.919747  158660 main.go:141] libmachine: (multinode-094885) Calling .GetState
	I0731 20:21:49.921359  158660 fix.go:112] recreateIfNeeded on multinode-094885: state=Running err=<nil>
	W0731 20:21:49.921379  158660 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:21:49.923232  158660 out.go:177] * Updating the running kvm2 "multinode-094885" VM ...
	I0731 20:21:49.924449  158660 machine.go:94] provisionDockerMachine start ...
	I0731 20:21:49.924469  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:21:49.924674  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:49.926903  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:49.927345  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:49.927373  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:49.927538  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:49.927716  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:49.927878  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:49.927991  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:49.928131  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:21:49.928336  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:21:49.928350  158660 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:21:50.038338  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-094885
	
	I0731 20:21:50.038385  158660 main.go:141] libmachine: (multinode-094885) Calling .GetMachineName
	I0731 20:21:50.038742  158660 buildroot.go:166] provisioning hostname "multinode-094885"
	I0731 20:21:50.038772  158660 main.go:141] libmachine: (multinode-094885) Calling .GetMachineName
	I0731 20:21:50.038950  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.041667  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.042035  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.042056  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.042137  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:50.042315  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.042482  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.042599  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:50.042769  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:21:50.042939  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:21:50.042954  158660 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-094885 && echo "multinode-094885" | sudo tee /etc/hostname
	I0731 20:21:50.170804  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-094885
	
	I0731 20:21:50.170837  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.173957  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.174425  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.174455  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.174648  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:50.174864  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.175045  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.175241  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:50.175449  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:21:50.175645  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:21:50.175672  158660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-094885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-094885/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-094885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:21:50.282597  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:21:50.282627  158660 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:21:50.282663  158660 buildroot.go:174] setting up certificates
	I0731 20:21:50.282674  158660 provision.go:84] configureAuth start
	I0731 20:21:50.282686  158660 main.go:141] libmachine: (multinode-094885) Calling .GetMachineName
	I0731 20:21:50.282924  158660 main.go:141] libmachine: (multinode-094885) Calling .GetIP
	I0731 20:21:50.285203  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.285631  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.285660  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.285826  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.288046  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.288341  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.288365  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.288469  158660 provision.go:143] copyHostCerts
	I0731 20:21:50.288500  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:21:50.288532  158660 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:21:50.288540  158660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:21:50.288608  158660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:21:50.288703  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:21:50.288717  158660 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:21:50.288721  158660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:21:50.288751  158660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:21:50.288814  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:21:50.288831  158660 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:21:50.288835  158660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:21:50.288858  158660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:21:50.288915  158660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.multinode-094885 san=[127.0.0.1 192.168.39.193 localhost minikube multinode-094885]
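minikube's provision code builds this server certificate in Go rather than shelling out to openssl; as a purely illustrative sketch, an equivalent openssl flow for the SANs listed above would look like the following (the file paths and the 365-day validity are assumptions, not what the test does):

    # Illustrative only: minikube creates this cert in Go, not with openssl.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-094885" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.193,DNS:localhost,DNS:minikube,DNS:multinode-094885") \
      -out server.pem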
	I0731 20:21:50.396756  158660 provision.go:177] copyRemoteCerts
	I0731 20:21:50.396818  158660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:21:50.396843  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.399576  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.399905  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.399927  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.400166  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:50.400366  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.400668  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:50.400786  158660 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:21:50.484539  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:21:50.484637  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:21:50.510614  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:21:50.510719  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 20:21:50.536136  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:21:50.536204  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:21:50.561748  158660 provision.go:87] duration metric: took 279.058934ms to configureAuth
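A quick way to confirm what copyRemoteCerts placed on the node is to inspect the files over the same SSH session; this is an assumed manual check, not something the test itself runs, and the -ext flag needs a reasonably recent openssl:

    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName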
	I0731 20:21:50.561781  158660 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:21:50.562015  158660 config.go:182] Loaded profile config "multinode-094885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:21:50.562088  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.564877  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.565265  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.565290  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.565493  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:50.565716  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.565862  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.565985  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:50.566106  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:21:50.566373  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:21:50.566396  158660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:23:21.444243  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:23:21.444286  158660 machine.go:97] duration metric: took 1m31.519817576s to provisionDockerMachine
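The %!s(MISSING) in the command above appears to be a logging artifact: the command contains a literal %s for printf, and the log formatter treats it as an unmatched verb. The SSH command itself writes the insecure-registry flag into CRI-O's sysconfig drop-in and restarts the service, which is where most of the 1m31s provisionDockerMachine time above went. The file it leaves behind is the one echoed back in the output:

    # /etc/sysconfig/crio.minikube, as written by the tee above
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '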
	I0731 20:23:21.444301  158660 start.go:293] postStartSetup for "multinode-094885" (driver="kvm2")
	I0731 20:23:21.444317  158660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:23:21.444337  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.444741  158660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:23:21.444780  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:23:21.448177  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.448611  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.448656  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.448939  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:23:21.449156  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.449325  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:23:21.449493  158660 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:23:21.537363  158660 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:23:21.541299  158660 command_runner.go:130] > NAME=Buildroot
	I0731 20:23:21.541316  158660 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 20:23:21.541321  158660 command_runner.go:130] > ID=buildroot
	I0731 20:23:21.541327  158660 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 20:23:21.541348  158660 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 20:23:21.541432  158660 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:23:21.541454  158660 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:23:21.541525  158660 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:23:21.541618  158660 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:23:21.541630  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 20:23:21.541737  158660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:23:21.551523  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:23:21.576695  158660 start.go:296] duration metric: took 132.364162ms for postStartSetup
	I0731 20:23:21.576751  158660 fix.go:56] duration metric: took 1m31.673715161s for fixHost
	I0731 20:23:21.576790  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:23:21.579534  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.579887  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.579916  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.580103  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:23:21.580325  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.580525  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.580651  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:23:21.580817  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:23:21.581030  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:23:21.581042  158660 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:23:21.686369  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722457401.657112561
	
	I0731 20:23:21.686396  158660 fix.go:216] guest clock: 1722457401.657112561
	I0731 20:23:21.686408  158660 fix.go:229] Guest: 2024-07-31 20:23:21.657112561 +0000 UTC Remote: 2024-07-31 20:23:21.576756777 +0000 UTC m=+91.798841457 (delta=80.355784ms)
	I0731 20:23:21.686444  158660 fix.go:200] guest clock delta is within tolerance: 80.355784ms
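The clock-skew check above runs date +%s.%N on the guest (the %!s(MISSING).%!N(MISSING) rendering is the same logging artifact as above; the seconds.nanoseconds output confirms the real format string) and compares it against the host's wall clock, accepting small deltas. A minimal sketch of the same idea, assuming an ssh alias named guest rather than the libmachine client the test uses:

    # Sketch of the drift check; "guest" is an assumed ssh alias.
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh guest date +%s.%N)
    echo "guest-host delta: $(echo "$guest_ts - $host_ts" | bc) s"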
	I0731 20:23:21.686454  158660 start.go:83] releasing machines lock for "multinode-094885", held for 1m31.783436589s
	I0731 20:23:21.686477  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.686759  158660 main.go:141] libmachine: (multinode-094885) Calling .GetIP
	I0731 20:23:21.689632  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.689969  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.689996  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.690156  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.690684  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.690868  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.690928  158660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:23:21.690982  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:23:21.691108  158660 ssh_runner.go:195] Run: cat /version.json
	I0731 20:23:21.691131  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:23:21.693639  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.693708  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.694046  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.694072  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.694109  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.694125  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.694192  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:23:21.694382  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:23:21.694538  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.694611  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.694776  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:23:21.694798  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:23:21.694949  158660 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:23:21.694978  158660 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:23:21.795496  158660 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 20:23:21.796250  158660 command_runner.go:130] > {"iso_version": "v1.33.1-1722420371-19355", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "7d72c3be84f92807e8ddb66796778c6727075dd6"}
	I0731 20:23:21.796413  158660 ssh_runner.go:195] Run: systemctl --version
	I0731 20:23:21.802371  158660 command_runner.go:130] > systemd 252 (252)
	I0731 20:23:21.802411  158660 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 20:23:21.802699  158660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:23:21.960519  158660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 20:23:21.967917  158660 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 20:23:21.967968  158660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:23:21.968052  158660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:23:21.977955  158660 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 20:23:21.977980  158660 start.go:495] detecting cgroup driver to use...
	I0731 20:23:21.978055  158660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:23:21.995788  158660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:23:22.009808  158660 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:23:22.009870  158660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:23:22.023908  158660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:23:22.037488  158660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:23:22.177446  158660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:23:22.315615  158660 docker.go:233] disabling docker service ...
	I0731 20:23:22.315718  158660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:23:22.332851  158660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:23:22.347146  158660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:23:22.481959  158660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:23:22.644466  158660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:23:22.686640  158660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:23:22.718678  158660 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
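The step above points crictl at the CRI-O socket; the resulting /etc/crictl.yaml is the single line echoed back by tee. With it in place, crictl commands no longer need an explicit --runtime-endpoint flag (the crictl ps line is an assumed usage example, not part of the test):

    # /etc/crictl.yaml as written above
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # example usage
    sudo crictl ps -a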
	I0731 20:23:22.718730  158660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:23:22.718799  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.732839  158660 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:23:22.732909  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.746446  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.761062  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.771509  158660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:23:22.782157  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.797807  158660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.811808  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.826195  158660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:23:22.836590  158660 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 20:23:22.837108  158660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:23:22.853689  158660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:23:23.020547  158660 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:23:33.251282  158660 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.230692017s)
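Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below; this is an approximation reconstructed from the commands, not a dump of the file from the node:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]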
	I0731 20:23:33.251317  158660 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:23:33.251418  158660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:23:33.256512  158660 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 20:23:33.256541  158660 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 20:23:33.256551  158660 command_runner.go:130] > Device: 0,22	Inode: 1427        Links: 1
	I0731 20:23:33.256561  158660 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 20:23:33.256566  158660 command_runner.go:130] > Access: 2024-07-31 20:23:33.070275458 +0000
	I0731 20:23:33.256572  158660 command_runner.go:130] > Modify: 2024-07-31 20:23:33.070275458 +0000
	I0731 20:23:33.256577  158660 command_runner.go:130] > Change: 2024-07-31 20:23:33.070275458 +0000
	I0731 20:23:33.256581  158660 command_runner.go:130] >  Birth: -
	I0731 20:23:33.256727  158660 start.go:563] Will wait 60s for crictl version
	I0731 20:23:33.256792  158660 ssh_runner.go:195] Run: which crictl
	I0731 20:23:33.260739  158660 command_runner.go:130] > /usr/bin/crictl
	I0731 20:23:33.260814  158660 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:23:33.303435  158660 command_runner.go:130] > Version:  0.1.0
	I0731 20:23:33.303460  158660 command_runner.go:130] > RuntimeName:  cri-o
	I0731 20:23:33.303465  158660 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 20:23:33.303470  158660 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 20:23:33.304737  158660 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:23:33.304809  158660 ssh_runner.go:195] Run: crio --version
	I0731 20:23:33.333865  158660 command_runner.go:130] > crio version 1.29.1
	I0731 20:23:33.333891  158660 command_runner.go:130] > Version:        1.29.1
	I0731 20:23:33.333900  158660 command_runner.go:130] > GitCommit:      unknown
	I0731 20:23:33.333905  158660 command_runner.go:130] > GitCommitDate:  unknown
	I0731 20:23:33.333912  158660 command_runner.go:130] > GitTreeState:   clean
	I0731 20:23:33.333920  158660 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0731 20:23:33.333927  158660 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 20:23:33.333931  158660 command_runner.go:130] > Compiler:       gc
	I0731 20:23:33.333937  158660 command_runner.go:130] > Platform:       linux/amd64
	I0731 20:23:33.333942  158660 command_runner.go:130] > Linkmode:       dynamic
	I0731 20:23:33.333948  158660 command_runner.go:130] > BuildTags:      
	I0731 20:23:33.333954  158660 command_runner.go:130] >   containers_image_ostree_stub
	I0731 20:23:33.333960  158660 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 20:23:33.333967  158660 command_runner.go:130] >   btrfs_noversion
	I0731 20:23:33.333975  158660 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 20:23:33.333985  158660 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 20:23:33.333992  158660 command_runner.go:130] >   seccomp
	I0731 20:23:33.334000  158660 command_runner.go:130] > LDFlags:          unknown
	I0731 20:23:33.334007  158660 command_runner.go:130] > SeccompEnabled:   true
	I0731 20:23:33.334034  158660 command_runner.go:130] > AppArmorEnabled:  false
	I0731 20:23:33.335129  158660 ssh_runner.go:195] Run: crio --version
	I0731 20:23:33.365542  158660 command_runner.go:130] > crio version 1.29.1
	I0731 20:23:33.365571  158660 command_runner.go:130] > Version:        1.29.1
	I0731 20:23:33.365579  158660 command_runner.go:130] > GitCommit:      unknown
	I0731 20:23:33.365586  158660 command_runner.go:130] > GitCommitDate:  unknown
	I0731 20:23:33.365591  158660 command_runner.go:130] > GitTreeState:   clean
	I0731 20:23:33.365598  158660 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0731 20:23:33.365604  158660 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 20:23:33.365610  158660 command_runner.go:130] > Compiler:       gc
	I0731 20:23:33.365617  158660 command_runner.go:130] > Platform:       linux/amd64
	I0731 20:23:33.365623  158660 command_runner.go:130] > Linkmode:       dynamic
	I0731 20:23:33.365630  158660 command_runner.go:130] > BuildTags:      
	I0731 20:23:33.365642  158660 command_runner.go:130] >   containers_image_ostree_stub
	I0731 20:23:33.365649  158660 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 20:23:33.365656  158660 command_runner.go:130] >   btrfs_noversion
	I0731 20:23:33.365661  158660 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 20:23:33.365665  158660 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 20:23:33.365669  158660 command_runner.go:130] >   seccomp
	I0731 20:23:33.365673  158660 command_runner.go:130] > LDFlags:          unknown
	I0731 20:23:33.365679  158660 command_runner.go:130] > SeccompEnabled:   true
	I0731 20:23:33.365683  158660 command_runner.go:130] > AppArmorEnabled:  false
	I0731 20:23:33.367890  158660 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:23:33.369393  158660 main.go:141] libmachine: (multinode-094885) Calling .GetIP
	I0731 20:23:33.372195  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:33.372591  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:33.372610  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:33.372884  158660 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:23:33.377361  158660 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 20:23:33.377556  158660 kubeadm.go:883] updating cluster {Name:multinode-094885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-094885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.53 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:23:33.377733  158660 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:23:33.377798  158660 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:23:33.419994  158660 command_runner.go:130] > {
	I0731 20:23:33.420015  158660 command_runner.go:130] >   "images": [
	I0731 20:23:33.420021  158660 command_runner.go:130] >     {
	I0731 20:23:33.420042  158660 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 20:23:33.420049  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420057  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 20:23:33.420062  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420067  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420080  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 20:23:33.420095  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 20:23:33.420103  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420111  158660 command_runner.go:130] >       "size": "87165492",
	I0731 20:23:33.420118  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420125  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.420136  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420143  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420149  158660 command_runner.go:130] >     },
	I0731 20:23:33.420155  158660 command_runner.go:130] >     {
	I0731 20:23:33.420168  158660 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 20:23:33.420175  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420184  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 20:23:33.420191  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420200  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420212  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 20:23:33.420230  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 20:23:33.420238  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420245  158660 command_runner.go:130] >       "size": "87174707",
	I0731 20:23:33.420255  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420269  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.420278  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420286  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420294  158660 command_runner.go:130] >     },
	I0731 20:23:33.420301  158660 command_runner.go:130] >     {
	I0731 20:23:33.420315  158660 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 20:23:33.420324  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420333  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 20:23:33.420341  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420349  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420364  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 20:23:33.420386  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 20:23:33.420481  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420505  158660 command_runner.go:130] >       "size": "1363676",
	I0731 20:23:33.420511  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420519  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.420529  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420538  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420546  158660 command_runner.go:130] >     },
	I0731 20:23:33.420552  158660 command_runner.go:130] >     {
	I0731 20:23:33.420564  158660 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 20:23:33.420574  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420586  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 20:23:33.420594  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420602  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420619  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 20:23:33.420667  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 20:23:33.420676  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420684  158660 command_runner.go:130] >       "size": "31470524",
	I0731 20:23:33.420691  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420699  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.420708  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420716  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420725  158660 command_runner.go:130] >     },
	I0731 20:23:33.420732  158660 command_runner.go:130] >     {
	I0731 20:23:33.420745  158660 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 20:23:33.420754  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420763  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 20:23:33.420771  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420778  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420793  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 20:23:33.420809  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 20:23:33.420818  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420826  158660 command_runner.go:130] >       "size": "61245718",
	I0731 20:23:33.420836  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420847  158660 command_runner.go:130] >       "username": "nonroot",
	I0731 20:23:33.420856  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420870  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420878  158660 command_runner.go:130] >     },
	I0731 20:23:33.420894  158660 command_runner.go:130] >     {
	I0731 20:23:33.420905  158660 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 20:23:33.420917  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420928  158660 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 20:23:33.420938  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420945  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420960  158660 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 20:23:33.420975  158660 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 20:23:33.420984  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420993  158660 command_runner.go:130] >       "size": "150779692",
	I0731 20:23:33.421002  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421009  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.421017  158660 command_runner.go:130] >       },
	I0731 20:23:33.421024  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421033  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421041  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421048  158660 command_runner.go:130] >     },
	I0731 20:23:33.421056  158660 command_runner.go:130] >     {
	I0731 20:23:33.421068  158660 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 20:23:33.421077  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421088  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 20:23:33.421096  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421104  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421117  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 20:23:33.421132  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 20:23:33.421141  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421152  158660 command_runner.go:130] >       "size": "117609954",
	I0731 20:23:33.421160  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421168  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.421176  158660 command_runner.go:130] >       },
	I0731 20:23:33.421183  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421192  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421200  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421209  158660 command_runner.go:130] >     },
	I0731 20:23:33.421222  158660 command_runner.go:130] >     {
	I0731 20:23:33.421236  158660 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 20:23:33.421245  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421255  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 20:23:33.421263  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421270  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421299  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 20:23:33.421315  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 20:23:33.421323  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421332  158660 command_runner.go:130] >       "size": "112198984",
	I0731 20:23:33.421352  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421360  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.421365  158660 command_runner.go:130] >       },
	I0731 20:23:33.421370  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421376  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421383  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421389  158660 command_runner.go:130] >     },
	I0731 20:23:33.421395  158660 command_runner.go:130] >     {
	I0731 20:23:33.421404  158660 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 20:23:33.421412  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421420  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 20:23:33.421426  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421436  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421449  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 20:23:33.421463  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 20:23:33.421472  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421481  158660 command_runner.go:130] >       "size": "85953945",
	I0731 20:23:33.421490  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.421499  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421509  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421517  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421523  158660 command_runner.go:130] >     },
	I0731 20:23:33.421528  158660 command_runner.go:130] >     {
	I0731 20:23:33.421540  158660 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 20:23:33.421549  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421561  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 20:23:33.421571  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421580  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421595  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 20:23:33.421610  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 20:23:33.421618  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421626  158660 command_runner.go:130] >       "size": "63051080",
	I0731 20:23:33.421635  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421647  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.421655  158660 command_runner.go:130] >       },
	I0731 20:23:33.421663  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421672  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421681  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421690  158660 command_runner.go:130] >     },
	I0731 20:23:33.421697  158660 command_runner.go:130] >     {
	I0731 20:23:33.421710  158660 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 20:23:33.421720  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421731  158660 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 20:23:33.421740  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421748  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421762  158660 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 20:23:33.421777  158660 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 20:23:33.421786  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421794  158660 command_runner.go:130] >       "size": "750414",
	I0731 20:23:33.421802  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421810  158660 command_runner.go:130] >         "value": "65535"
	I0731 20:23:33.421818  158660 command_runner.go:130] >       },
	I0731 20:23:33.421826  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421834  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421842  158660 command_runner.go:130] >       "pinned": true
	I0731 20:23:33.421850  158660 command_runner.go:130] >     }
	I0731 20:23:33.421856  158660 command_runner.go:130] >   ]
	I0731 20:23:33.421862  158660 command_runner.go:130] > }
	I0731 20:23:33.422054  158660 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:23:33.422067  158660 crio.go:433] Images already preloaded, skipping extraction
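The preload check above parses sudo crictl images --output json (the JSON dump printed line-by-line above) and compares it with the image list expected for Kubernetes v1.30.3. An assumed manual equivalent, with jq available on the node, would be:

    # Assumed manual equivalent of the preload check, not executed by the test
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort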
	I0731 20:23:33.422127  158660 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:23:33.457411  158660 command_runner.go:130] > {
	I0731 20:23:33.457437  158660 command_runner.go:130] >   "images": [
	I0731 20:23:33.457443  158660 command_runner.go:130] >     {
	I0731 20:23:33.457454  158660 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 20:23:33.457461  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.457469  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 20:23:33.457474  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457478  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.457489  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 20:23:33.457499  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 20:23:33.457505  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457512  158660 command_runner.go:130] >       "size": "87165492",
	I0731 20:23:33.457519  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.457526  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.457548  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.457558  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.457564  158660 command_runner.go:130] >     },
	I0731 20:23:33.457569  158660 command_runner.go:130] >     {
	I0731 20:23:33.457578  158660 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 20:23:33.457586  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.457596  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 20:23:33.457604  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457611  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.457623  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 20:23:33.457637  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 20:23:33.457654  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457663  158660 command_runner.go:130] >       "size": "87174707",
	I0731 20:23:33.457670  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.457684  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.457693  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.457701  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.457709  158660 command_runner.go:130] >     },
	I0731 20:23:33.457716  158660 command_runner.go:130] >     {
	I0731 20:23:33.457730  158660 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 20:23:33.457740  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.457749  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 20:23:33.457758  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457765  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.457780  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 20:23:33.457796  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 20:23:33.457804  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457811  158660 command_runner.go:130] >       "size": "1363676",
	I0731 20:23:33.457820  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.457827  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.457836  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.457844  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.457853  158660 command_runner.go:130] >     },
	I0731 20:23:33.457861  158660 command_runner.go:130] >     {
	I0731 20:23:33.457872  158660 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 20:23:33.457881  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.457894  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 20:23:33.457903  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457910  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.457926  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 20:23:33.457946  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 20:23:33.457954  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457962  158660 command_runner.go:130] >       "size": "31470524",
	I0731 20:23:33.457971  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.457980  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.457987  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.457998  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458008  158660 command_runner.go:130] >     },
	I0731 20:23:33.458015  158660 command_runner.go:130] >     {
	I0731 20:23:33.458026  158660 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 20:23:33.458034  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458044  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 20:23:33.458052  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458060  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458075  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 20:23:33.458090  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 20:23:33.458098  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458106  158660 command_runner.go:130] >       "size": "61245718",
	I0731 20:23:33.458116  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.458124  158660 command_runner.go:130] >       "username": "nonroot",
	I0731 20:23:33.458133  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458141  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458149  158660 command_runner.go:130] >     },
	I0731 20:23:33.458155  158660 command_runner.go:130] >     {
	I0731 20:23:33.458168  158660 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 20:23:33.458177  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458187  158660 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 20:23:33.458195  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458202  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458215  158660 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 20:23:33.458229  158660 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 20:23:33.458237  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458246  158660 command_runner.go:130] >       "size": "150779692",
	I0731 20:23:33.458256  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.458264  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.458271  158660 command_runner.go:130] >       },
	I0731 20:23:33.458281  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458290  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458298  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458304  158660 command_runner.go:130] >     },
	I0731 20:23:33.458314  158660 command_runner.go:130] >     {
	I0731 20:23:33.458328  158660 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 20:23:33.458338  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458347  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 20:23:33.458355  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458364  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458379  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 20:23:33.458394  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 20:23:33.458403  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458410  158660 command_runner.go:130] >       "size": "117609954",
	I0731 20:23:33.458420  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.458429  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.458437  158660 command_runner.go:130] >       },
	I0731 20:23:33.458444  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458453  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458461  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458469  158660 command_runner.go:130] >     },
	I0731 20:23:33.458476  158660 command_runner.go:130] >     {
	I0731 20:23:33.458488  158660 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 20:23:33.458497  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458506  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 20:23:33.458514  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458522  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458546  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 20:23:33.458560  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 20:23:33.458566  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458573  158660 command_runner.go:130] >       "size": "112198984",
	I0731 20:23:33.458584  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.458594  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.458600  158660 command_runner.go:130] >       },
	I0731 20:23:33.458616  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458627  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458633  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458639  158660 command_runner.go:130] >     },
	I0731 20:23:33.458648  158660 command_runner.go:130] >     {
	I0731 20:23:33.458655  158660 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 20:23:33.458662  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458667  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 20:23:33.458670  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458675  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458684  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 20:23:33.458698  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 20:23:33.458708  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458715  158660 command_runner.go:130] >       "size": "85953945",
	I0731 20:23:33.458724  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.458733  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458742  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458752  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458761  158660 command_runner.go:130] >     },
	I0731 20:23:33.458768  158660 command_runner.go:130] >     {
	I0731 20:23:33.458775  158660 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 20:23:33.458781  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458786  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 20:23:33.458795  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458805  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458820  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 20:23:33.458834  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 20:23:33.458842  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458852  158660 command_runner.go:130] >       "size": "63051080",
	I0731 20:23:33.458861  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.458868  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.458872  158660 command_runner.go:130] >       },
	I0731 20:23:33.458879  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458889  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458899  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458908  158660 command_runner.go:130] >     },
	I0731 20:23:33.458916  158660 command_runner.go:130] >     {
	I0731 20:23:33.458926  158660 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 20:23:33.458936  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458946  158660 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 20:23:33.458952  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458957  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458970  158660 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 20:23:33.458984  158660 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 20:23:33.458993  158660 command_runner.go:130] >       ],
	I0731 20:23:33.459002  158660 command_runner.go:130] >       "size": "750414",
	I0731 20:23:33.459011  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.459021  158660 command_runner.go:130] >         "value": "65535"
	I0731 20:23:33.459028  158660 command_runner.go:130] >       },
	I0731 20:23:33.459035  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.459044  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.459051  158660 command_runner.go:130] >       "pinned": true
	I0731 20:23:33.459056  158660 command_runner.go:130] >     }
	I0731 20:23:33.459064  158660 command_runner.go:130] >   ]
	I0731 20:23:33.459072  158660 command_runner.go:130] > }
	I0731 20:23:33.459250  158660 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:23:33.459264  158660 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:23:33.459273  158660 kubeadm.go:934] updating node { 192.168.39.193 8443 v1.30.3 crio true true} ...
	I0731 20:23:33.459497  158660 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-094885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-094885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:23:33.459591  158660 ssh_runner.go:195] Run: crio config
	I0731 20:23:33.498850  158660 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 20:23:33.498882  158660 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 20:23:33.498892  158660 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 20:23:33.498896  158660 command_runner.go:130] > #
	I0731 20:23:33.498906  158660 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 20:23:33.498915  158660 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 20:23:33.498922  158660 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 20:23:33.498932  158660 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 20:23:33.498938  158660 command_runner.go:130] > # reload'.
	I0731 20:23:33.498950  158660 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 20:23:33.498977  158660 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 20:23:33.498990  158660 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 20:23:33.498999  158660 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 20:23:33.499006  158660 command_runner.go:130] > [crio]
	I0731 20:23:33.499016  158660 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 20:23:33.499024  158660 command_runner.go:130] > # containers images, in this directory.
	I0731 20:23:33.499032  158660 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 20:23:33.499047  158660 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 20:23:33.499058  158660 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 20:23:33.499070  158660 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0731 20:23:33.499079  158660 command_runner.go:130] > # imagestore = ""
	I0731 20:23:33.499090  158660 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 20:23:33.499103  158660 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 20:23:33.499110  158660 command_runner.go:130] > storage_driver = "overlay"
	I0731 20:23:33.499121  158660 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 20:23:33.499133  158660 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 20:23:33.499139  158660 command_runner.go:130] > storage_option = [
	I0731 20:23:33.499148  158660 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 20:23:33.499155  158660 command_runner.go:130] > ]
	I0731 20:23:33.499166  158660 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 20:23:33.499180  158660 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 20:23:33.499190  158660 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 20:23:33.499203  158660 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 20:23:33.499216  158660 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 20:23:33.499226  158660 command_runner.go:130] > # always happen on a node reboot
	I0731 20:23:33.499238  158660 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 20:23:33.499254  158660 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 20:23:33.499267  158660 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 20:23:33.499278  158660 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 20:23:33.499290  158660 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 20:23:33.499306  158660 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 20:23:33.499322  158660 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 20:23:33.499336  158660 command_runner.go:130] > # internal_wipe = true
	I0731 20:23:33.499352  158660 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 20:23:33.499363  158660 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 20:23:33.499394  158660 command_runner.go:130] > # internal_repair = false
	I0731 20:23:33.499405  158660 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 20:23:33.499416  158660 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 20:23:33.499426  158660 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 20:23:33.499438  158660 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 20:23:33.499452  158660 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 20:23:33.499460  158660 command_runner.go:130] > [crio.api]
	I0731 20:23:33.499470  158660 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 20:23:33.499480  158660 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 20:23:33.499493  158660 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 20:23:33.499503  158660 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 20:23:33.499517  158660 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 20:23:33.499528  158660 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 20:23:33.499537  158660 command_runner.go:130] > # stream_port = "0"
	I0731 20:23:33.499548  158660 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 20:23:33.499558  158660 command_runner.go:130] > # stream_enable_tls = false
	I0731 20:23:33.499571  158660 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 20:23:33.499580  158660 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 20:23:33.499593  158660 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 20:23:33.499605  158660 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 20:23:33.499611  158660 command_runner.go:130] > # minutes.
	I0731 20:23:33.499620  158660 command_runner.go:130] > # stream_tls_cert = ""
	I0731 20:23:33.499630  158660 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 20:23:33.499641  158660 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 20:23:33.499650  158660 command_runner.go:130] > # stream_tls_key = ""
	I0731 20:23:33.499660  158660 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 20:23:33.499674  158660 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 20:23:33.499695  158660 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 20:23:33.499704  158660 command_runner.go:130] > # stream_tls_ca = ""
	I0731 20:23:33.499717  158660 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 20:23:33.499727  158660 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 20:23:33.499742  158660 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 20:23:33.499753  158660 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0731 20:23:33.499765  158660 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 20:23:33.499776  158660 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 20:23:33.499783  158660 command_runner.go:130] > [crio.runtime]
	I0731 20:23:33.499793  158660 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 20:23:33.499802  158660 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 20:23:33.499811  158660 command_runner.go:130] > # "nofile=1024:2048"
	I0731 20:23:33.499823  158660 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 20:23:33.499830  158660 command_runner.go:130] > # default_ulimits = [
	I0731 20:23:33.499834  158660 command_runner.go:130] > # ]
	I0731 20:23:33.499840  158660 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 20:23:33.499847  158660 command_runner.go:130] > # no_pivot = false
	I0731 20:23:33.499852  158660 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 20:23:33.499861  158660 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 20:23:33.499869  158660 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 20:23:33.499881  158660 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 20:23:33.499892  158660 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 20:23:33.499906  158660 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 20:23:33.499915  158660 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 20:23:33.499922  158660 command_runner.go:130] > # Cgroup setting for conmon
	I0731 20:23:33.499934  158660 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 20:23:33.499940  158660 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 20:23:33.499946  158660 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 20:23:33.499954  158660 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 20:23:33.499964  158660 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 20:23:33.499974  158660 command_runner.go:130] > conmon_env = [
	I0731 20:23:33.499984  158660 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 20:23:33.499993  158660 command_runner.go:130] > ]
	I0731 20:23:33.500002  158660 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 20:23:33.500013  158660 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 20:23:33.500024  158660 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 20:23:33.500034  158660 command_runner.go:130] > # default_env = [
	I0731 20:23:33.500039  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500046  158660 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 20:23:33.500057  158660 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0731 20:23:33.500066  158660 command_runner.go:130] > # selinux = false
	I0731 20:23:33.500076  158660 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 20:23:33.500090  158660 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 20:23:33.500102  158660 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 20:23:33.500111  158660 command_runner.go:130] > # seccomp_profile = ""
	I0731 20:23:33.500120  158660 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 20:23:33.500128  158660 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 20:23:33.500139  158660 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 20:23:33.500149  158660 command_runner.go:130] > # which might increase security.
	I0731 20:23:33.500156  158660 command_runner.go:130] > # This option is currently deprecated,
	I0731 20:23:33.500169  158660 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 20:23:33.500179  158660 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 20:23:33.500192  158660 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 20:23:33.500205  158660 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 20:23:33.500216  158660 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 20:23:33.500225  158660 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 20:23:33.500233  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.500246  158660 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 20:23:33.500256  158660 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 20:23:33.500266  158660 command_runner.go:130] > # the cgroup blockio controller.
	I0731 20:23:33.500275  158660 command_runner.go:130] > # blockio_config_file = ""
	I0731 20:23:33.500287  158660 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 20:23:33.500297  158660 command_runner.go:130] > # blockio parameters.
	I0731 20:23:33.500306  158660 command_runner.go:130] > # blockio_reload = false
	I0731 20:23:33.500316  158660 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 20:23:33.500323  158660 command_runner.go:130] > # irqbalance daemon.
	I0731 20:23:33.500335  158660 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 20:23:33.500347  158660 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 20:23:33.500358  158660 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 20:23:33.500373  158660 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 20:23:33.500390  158660 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 20:23:33.500401  158660 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 20:23:33.500412  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.500419  158660 command_runner.go:130] > # rdt_config_file = ""
	I0731 20:23:33.500426  158660 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 20:23:33.500436  158660 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 20:23:33.500456  158660 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 20:23:33.500466  158660 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 20:23:33.500476  158660 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 20:23:33.500488  158660 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 20:23:33.500497  158660 command_runner.go:130] > # will be added.
	I0731 20:23:33.500503  158660 command_runner.go:130] > # default_capabilities = [
	I0731 20:23:33.500510  158660 command_runner.go:130] > # 	"CHOWN",
	I0731 20:23:33.500516  158660 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 20:23:33.500525  158660 command_runner.go:130] > # 	"FSETID",
	I0731 20:23:33.500531  158660 command_runner.go:130] > # 	"FOWNER",
	I0731 20:23:33.500536  158660 command_runner.go:130] > # 	"SETGID",
	I0731 20:23:33.500542  158660 command_runner.go:130] > # 	"SETUID",
	I0731 20:23:33.500551  158660 command_runner.go:130] > # 	"SETPCAP",
	I0731 20:23:33.500558  158660 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 20:23:33.500566  158660 command_runner.go:130] > # 	"KILL",
	I0731 20:23:33.500572  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500587  158660 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 20:23:33.500599  158660 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 20:23:33.500607  158660 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 20:23:33.500615  158660 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 20:23:33.500627  158660 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 20:23:33.500636  158660 command_runner.go:130] > default_sysctls = [
	I0731 20:23:33.500644  158660 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 20:23:33.500652  158660 command_runner.go:130] > ]
	I0731 20:23:33.500660  158660 command_runner.go:130] > # List of devices on the host that a
	I0731 20:23:33.500672  158660 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 20:23:33.500681  158660 command_runner.go:130] > # allowed_devices = [
	I0731 20:23:33.500690  158660 command_runner.go:130] > # 	"/dev/fuse",
	I0731 20:23:33.500697  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500703  158660 command_runner.go:130] > # List of additional devices. specified as
	I0731 20:23:33.500716  158660 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 20:23:33.500727  158660 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 20:23:33.500738  158660 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 20:23:33.500747  158660 command_runner.go:130] > # additional_devices = [
	I0731 20:23:33.500752  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500763  158660 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 20:23:33.500772  158660 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 20:23:33.500778  158660 command_runner.go:130] > # 	"/etc/cdi",
	I0731 20:23:33.500785  158660 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 20:23:33.500788  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500800  158660 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 20:23:33.500814  158660 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 20:23:33.500824  158660 command_runner.go:130] > # Defaults to false.
	I0731 20:23:33.500832  158660 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 20:23:33.500844  158660 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 20:23:33.500857  158660 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 20:23:33.500865  158660 command_runner.go:130] > # hooks_dir = [
	I0731 20:23:33.500873  158660 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 20:23:33.500880  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500890  158660 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 20:23:33.500903  158660 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 20:23:33.500914  158660 command_runner.go:130] > # its default mounts from the following two files:
	I0731 20:23:33.500922  158660 command_runner.go:130] > #
	I0731 20:23:33.500931  158660 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 20:23:33.500972  158660 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 20:23:33.500993  158660 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 20:23:33.501001  158660 command_runner.go:130] > #
	I0731 20:23:33.501015  158660 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 20:23:33.501028  158660 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 20:23:33.501041  158660 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 20:23:33.501049  158660 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 20:23:33.501055  158660 command_runner.go:130] > #
	I0731 20:23:33.501064  158660 command_runner.go:130] > # default_mounts_file = ""
	I0731 20:23:33.501072  158660 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 20:23:33.501086  158660 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 20:23:33.501096  158660 command_runner.go:130] > pids_limit = 1024
	I0731 20:23:33.501108  158660 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0731 20:23:33.501120  158660 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 20:23:33.501133  158660 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 20:23:33.501148  158660 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 20:23:33.501158  158660 command_runner.go:130] > # log_size_max = -1
	I0731 20:23:33.501169  158660 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 20:23:33.501179  158660 command_runner.go:130] > # log_to_journald = false
	I0731 20:23:33.501189  158660 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 20:23:33.501200  158660 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 20:23:33.501209  158660 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 20:23:33.501214  158660 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 20:23:33.501225  158660 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 20:23:33.501236  158660 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 20:23:33.501245  158660 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 20:23:33.501255  158660 command_runner.go:130] > # read_only = false
	I0731 20:23:33.501265  158660 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 20:23:33.501278  158660 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 20:23:33.501287  158660 command_runner.go:130] > # live configuration reload.
	I0731 20:23:33.501294  158660 command_runner.go:130] > # log_level = "info"
	I0731 20:23:33.501305  158660 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 20:23:33.501312  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.501321  158660 command_runner.go:130] > # log_filter = ""
	I0731 20:23:33.501330  158660 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 20:23:33.501352  158660 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 20:23:33.501359  158660 command_runner.go:130] > # separated by comma.
	I0731 20:23:33.501378  158660 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 20:23:33.501387  158660 command_runner.go:130] > # uid_mappings = ""
	I0731 20:23:33.501396  158660 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 20:23:33.501408  158660 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 20:23:33.501418  158660 command_runner.go:130] > # separated by comma.
	I0731 20:23:33.501431  158660 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 20:23:33.501439  158660 command_runner.go:130] > # gid_mappings = ""
	I0731 20:23:33.501449  158660 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 20:23:33.501461  158660 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 20:23:33.501470  158660 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 20:23:33.501485  158660 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 20:23:33.501495  158660 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 20:23:33.501505  158660 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 20:23:33.501517  158660 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 20:23:33.501530  158660 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 20:23:33.501545  158660 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 20:23:33.501554  158660 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 20:23:33.501563  158660 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 20:23:33.501576  158660 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 20:23:33.501588  158660 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 20:23:33.501596  158660 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 20:23:33.501606  158660 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 20:23:33.501619  158660 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 20:23:33.501630  158660 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 20:23:33.501640  158660 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 20:23:33.501647  158660 command_runner.go:130] > drop_infra_ctr = false
	I0731 20:23:33.501658  158660 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 20:23:33.501670  158660 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 20:23:33.501684  158660 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 20:23:33.501694  158660 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 20:23:33.501705  158660 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 20:23:33.501718  158660 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 20:23:33.501730  158660 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 20:23:33.501742  158660 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 20:23:33.501751  158660 command_runner.go:130] > # shared_cpuset = ""
	I0731 20:23:33.501761  158660 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 20:23:33.501773  158660 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 20:23:33.501783  158660 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 20:23:33.501794  158660 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 20:23:33.501801  158660 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 20:23:33.501810  158660 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 20:23:33.501823  158660 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 20:23:33.501830  158660 command_runner.go:130] > # enable_criu_support = false
	I0731 20:23:33.501842  158660 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 20:23:33.501855  158660 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 20:23:33.501865  158660 command_runner.go:130] > # enable_pod_events = false
	I0731 20:23:33.501892  158660 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 20:23:33.501908  158660 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 20:23:33.501927  158660 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 20:23:33.501934  158660 command_runner.go:130] > # default_runtime = "runc"
	I0731 20:23:33.501940  158660 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 20:23:33.501947  158660 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 20:23:33.501960  158660 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 20:23:33.501966  158660 command_runner.go:130] > # creation as a file is not desired either.
	I0731 20:23:33.501975  158660 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 20:23:33.501985  158660 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 20:23:33.501993  158660 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 20:23:33.502001  158660 command_runner.go:130] > # ]
	I0731 20:23:33.502013  158660 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 20:23:33.502026  158660 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 20:23:33.502036  158660 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 20:23:33.502047  158660 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 20:23:33.502056  158660 command_runner.go:130] > #
	I0731 20:23:33.502063  158660 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 20:23:33.502073  158660 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 20:23:33.502119  158660 command_runner.go:130] > # runtime_type = "oci"
	I0731 20:23:33.502132  158660 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 20:23:33.502140  158660 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 20:23:33.502150  158660 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 20:23:33.502158  158660 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 20:23:33.502171  158660 command_runner.go:130] > # monitor_env = []
	I0731 20:23:33.502183  158660 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 20:23:33.502192  158660 command_runner.go:130] > # allowed_annotations = []
	I0731 20:23:33.502201  158660 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 20:23:33.502210  158660 command_runner.go:130] > # Where:
	I0731 20:23:33.502218  158660 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 20:23:33.502230  158660 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 20:23:33.502243  158660 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 20:23:33.502255  158660 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 20:23:33.502264  158660 command_runner.go:130] > #   in $PATH.
	I0731 20:23:33.502276  158660 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 20:23:33.502286  158660 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 20:23:33.502298  158660 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 20:23:33.502305  158660 command_runner.go:130] > #   state.
	I0731 20:23:33.502313  158660 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 20:23:33.502324  158660 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0731 20:23:33.502334  158660 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 20:23:33.502347  158660 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 20:23:33.502359  158660 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 20:23:33.502372  158660 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 20:23:33.502386  158660 command_runner.go:130] > #   The currently recognized values are:
	I0731 20:23:33.502396  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 20:23:33.502410  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 20:23:33.502421  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 20:23:33.502435  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 20:23:33.502445  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 20:23:33.502457  158660 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 20:23:33.502472  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 20:23:33.502486  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 20:23:33.502500  158660 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 20:23:33.502513  158660 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 20:23:33.502524  158660 command_runner.go:130] > #   deprecated option "conmon".
	I0731 20:23:33.502537  158660 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 20:23:33.502547  158660 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 20:23:33.502561  158660 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 20:23:33.502575  158660 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 20:23:33.502591  158660 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0731 20:23:33.502602  158660 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 20:23:33.502613  158660 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 20:23:33.502625  158660 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 20:23:33.502630  158660 command_runner.go:130] > #
	I0731 20:23:33.502638  158660 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 20:23:33.502646  158660 command_runner.go:130] > #
	I0731 20:23:33.502657  158660 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 20:23:33.502671  158660 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 20:23:33.502678  158660 command_runner.go:130] > #
	I0731 20:23:33.502688  158660 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 20:23:33.502701  158660 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 20:23:33.502709  158660 command_runner.go:130] > #
	I0731 20:23:33.502720  158660 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 20:23:33.502728  158660 command_runner.go:130] > # feature.
	I0731 20:23:33.502734  158660 command_runner.go:130] > #
	I0731 20:23:33.502747  158660 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 20:23:33.502760  158660 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 20:23:33.502774  158660 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 20:23:33.502786  158660 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 20:23:33.502798  158660 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 20:23:33.502807  158660 command_runner.go:130] > #
	I0731 20:23:33.502817  158660 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 20:23:33.502830  158660 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 20:23:33.502840  158660 command_runner.go:130] > #
	I0731 20:23:33.502853  158660 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 20:23:33.502866  158660 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 20:23:33.502874  158660 command_runner.go:130] > #
	I0731 20:23:33.502884  158660 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 20:23:33.502897  158660 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 20:23:33.502906  158660 command_runner.go:130] > # limitation.
	I0731 20:23:33.502913  158660 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 20:23:33.502924  158660 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 20:23:33.502932  158660 command_runner.go:130] > runtime_type = "oci"
	I0731 20:23:33.502940  158660 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 20:23:33.502948  158660 command_runner.go:130] > runtime_config_path = ""
	I0731 20:23:33.502959  158660 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 20:23:33.502966  158660 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 20:23:33.502975  158660 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 20:23:33.502982  158660 command_runner.go:130] > monitor_env = [
	I0731 20:23:33.502995  158660 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 20:23:33.503003  158660 command_runner.go:130] > ]
	I0731 20:23:33.503011  158660 command_runner.go:130] > privileged_without_host_devices = false
	I0731 20:23:33.503025  158660 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 20:23:33.503036  158660 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 20:23:33.503050  158660 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 20:23:33.503065  158660 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 20:23:33.503081  158660 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 20:23:33.503094  158660 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 20:23:33.503116  158660 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 20:23:33.503132  158660 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 20:23:33.503144  158660 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 20:23:33.503157  158660 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 20:23:33.503161  158660 command_runner.go:130] > # Example:
	I0731 20:23:33.503167  158660 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 20:23:33.503178  158660 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 20:23:33.503187  158660 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 20:23:33.503194  158660 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 20:23:33.503200  158660 command_runner.go:130] > # cpuset = 0
	I0731 20:23:33.503207  158660 command_runner.go:130] > # cpushares = "0-1"
	I0731 20:23:33.503214  158660 command_runner.go:130] > # Where:
	I0731 20:23:33.503222  158660 command_runner.go:130] > # The workload name is workload-type.
	I0731 20:23:33.503232  158660 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 20:23:33.503240  158660 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 20:23:33.503249  158660 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 20:23:33.503261  158660 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 20:23:33.503271  158660 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 20:23:33.503279  158660 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 20:23:33.503290  158660 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 20:23:33.503297  158660 command_runner.go:130] > # Default value is set to true
	I0731 20:23:33.503305  158660 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 20:23:33.503317  158660 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 20:23:33.503328  158660 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 20:23:33.503339  158660 command_runner.go:130] > # Default value is set to 'false'
	I0731 20:23:33.503349  158660 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 20:23:33.503363  158660 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 20:23:33.503372  158660 command_runner.go:130] > #
	I0731 20:23:33.503392  158660 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 20:23:33.503407  158660 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 20:23:33.503421  158660 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 20:23:33.503434  158660 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 20:23:33.503446  158660 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 20:23:33.503456  158660 command_runner.go:130] > [crio.image]
	I0731 20:23:33.503466  158660 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 20:23:33.503476  158660 command_runner.go:130] > # default_transport = "docker://"
	I0731 20:23:33.503489  158660 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 20:23:33.503502  158660 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 20:23:33.503510  158660 command_runner.go:130] > # global_auth_file = ""
	I0731 20:23:33.503518  158660 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 20:23:33.503524  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.503530  158660 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 20:23:33.503536  158660 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 20:23:33.503544  158660 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 20:23:33.503549  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.503556  158660 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 20:23:33.503561  158660 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 20:23:33.503570  158660 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 20:23:33.503577  158660 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 20:23:33.503585  158660 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 20:23:33.503589  158660 command_runner.go:130] > # pause_command = "/pause"
	I0731 20:23:33.503596  158660 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 20:23:33.503602  158660 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 20:23:33.503608  158660 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 20:23:33.503613  158660 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 20:23:33.503621  158660 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 20:23:33.503628  158660 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 20:23:33.503634  158660 command_runner.go:130] > # pinned_images = [
	I0731 20:23:33.503637  158660 command_runner.go:130] > # ]
	I0731 20:23:33.503643  158660 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 20:23:33.503651  158660 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 20:23:33.503657  158660 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 20:23:33.503664  158660 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 20:23:33.503673  158660 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 20:23:33.503682  158660 command_runner.go:130] > # signature_policy = ""
	I0731 20:23:33.503691  158660 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 20:23:33.503701  158660 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 20:23:33.503707  158660 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 20:23:33.503715  158660 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 20:23:33.503721  158660 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 20:23:33.503728  158660 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 20:23:33.503733  158660 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 20:23:33.503741  158660 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 20:23:33.503747  158660 command_runner.go:130] > # changing them here.
	I0731 20:23:33.503751  158660 command_runner.go:130] > # insecure_registries = [
	I0731 20:23:33.503755  158660 command_runner.go:130] > # ]
	I0731 20:23:33.503762  158660 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 20:23:33.503770  158660 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 20:23:33.503774  158660 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 20:23:33.503780  158660 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 20:23:33.503785  158660 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 20:23:33.503792  158660 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 20:23:33.503798  158660 command_runner.go:130] > # CNI plugins.
	I0731 20:23:33.503802  158660 command_runner.go:130] > [crio.network]
	I0731 20:23:33.503810  158660 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 20:23:33.503815  158660 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 20:23:33.503821  158660 command_runner.go:130] > # cni_default_network = ""
	I0731 20:23:33.503826  158660 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 20:23:33.503832  158660 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 20:23:33.503838  158660 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 20:23:33.503843  158660 command_runner.go:130] > # plugin_dirs = [
	I0731 20:23:33.503847  158660 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 20:23:33.503850  158660 command_runner.go:130] > # ]
	I0731 20:23:33.503857  158660 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 20:23:33.503861  158660 command_runner.go:130] > [crio.metrics]
	I0731 20:23:33.503865  158660 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 20:23:33.503872  158660 command_runner.go:130] > enable_metrics = true
	I0731 20:23:33.503876  158660 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 20:23:33.503883  158660 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 20:23:33.503888  158660 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0731 20:23:33.503896  158660 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 20:23:33.503902  158660 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 20:23:33.503906  158660 command_runner.go:130] > # metrics_collectors = [
	I0731 20:23:33.503910  158660 command_runner.go:130] > # 	"operations",
	I0731 20:23:33.503916  158660 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 20:23:33.503923  158660 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 20:23:33.503929  158660 command_runner.go:130] > # 	"operations_errors",
	I0731 20:23:33.503933  158660 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 20:23:33.503937  158660 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 20:23:33.503942  158660 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 20:23:33.503946  158660 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 20:23:33.503950  158660 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 20:23:33.503957  158660 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 20:23:33.503960  158660 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 20:23:33.503967  158660 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 20:23:33.503971  158660 command_runner.go:130] > # 	"containers_oom_total",
	I0731 20:23:33.503975  158660 command_runner.go:130] > # 	"containers_oom",
	I0731 20:23:33.503979  158660 command_runner.go:130] > # 	"processes_defunct",
	I0731 20:23:33.503985  158660 command_runner.go:130] > # 	"operations_total",
	I0731 20:23:33.503990  158660 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 20:23:33.503997  158660 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 20:23:33.504001  158660 command_runner.go:130] > # 	"operations_errors_total",
	I0731 20:23:33.504007  158660 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 20:23:33.504012  158660 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 20:23:33.504018  158660 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 20:23:33.504022  158660 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 20:23:33.504028  158660 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 20:23:33.504032  158660 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 20:23:33.504039  158660 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 20:23:33.504043  158660 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 20:23:33.504048  158660 command_runner.go:130] > # ]
	I0731 20:23:33.504053  158660 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 20:23:33.504059  158660 command_runner.go:130] > # metrics_port = 9090
	I0731 20:23:33.504065  158660 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 20:23:33.504071  158660 command_runner.go:130] > # metrics_socket = ""
	I0731 20:23:33.504076  158660 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 20:23:33.504084  158660 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 20:23:33.504091  158660 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 20:23:33.504097  158660 command_runner.go:130] > # certificate on any modification event.
	I0731 20:23:33.504102  158660 command_runner.go:130] > # metrics_cert = ""
	I0731 20:23:33.504108  158660 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 20:23:33.504113  158660 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 20:23:33.504119  158660 command_runner.go:130] > # metrics_key = ""
	I0731 20:23:33.504124  158660 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 20:23:33.504130  158660 command_runner.go:130] > [crio.tracing]
	I0731 20:23:33.504135  158660 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 20:23:33.504141  158660 command_runner.go:130] > # enable_tracing = false
	I0731 20:23:33.504146  158660 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 20:23:33.504153  158660 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 20:23:33.504159  158660 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 20:23:33.504166  158660 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 20:23:33.504170  158660 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 20:23:33.504173  158660 command_runner.go:130] > [crio.nri]
	I0731 20:23:33.504178  158660 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 20:23:33.504183  158660 command_runner.go:130] > # enable_nri = false
	I0731 20:23:33.504189  158660 command_runner.go:130] > # NRI socket to listen on.
	I0731 20:23:33.504195  158660 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 20:23:33.504199  158660 command_runner.go:130] > # NRI plugin directory to use.
	I0731 20:23:33.504206  158660 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 20:23:33.504210  158660 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 20:23:33.504220  158660 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 20:23:33.504228  158660 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 20:23:33.504232  158660 command_runner.go:130] > # nri_disable_connections = false
	I0731 20:23:33.504239  158660 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 20:23:33.504244  158660 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 20:23:33.504251  158660 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 20:23:33.504255  158660 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 20:23:33.504261  158660 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 20:23:33.504265  158660 command_runner.go:130] > [crio.stats]
	I0731 20:23:33.504272  158660 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 20:23:33.504279  158660 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 20:23:33.504283  158660 command_runner.go:130] > # stats_collection_period = 0
	I0731 20:23:33.504926  158660 command_runner.go:130] ! time="2024-07-31 20:23:33.461945345Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 20:23:33.504961  158660 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
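The config dump above documents the [crio.image], [crio.network], [crio.metrics], [crio.tracing], [crio.nri], and [crio.stats] tables this run relies on; enable_metrics = true is the only non-default value shown. As a minimal sketch (not taken from this run), a few of the documented keys could be overridden through a drop-in file, assuming CRI-O's standard /etc/crio/crio.conf.d drop-in directory and the key names shown in the dump:

	# Hypothetical drop-in; file name and values are illustrative only.
	sudo tee /etc/crio/crio.conf.d/99-minikube-overrides.conf <<'EOF'
	[crio.image]
	# keep the pause image out of kubelet garbage collection, as described above
	pinned_images = ["registry.k8s.io/pause:3.9"]

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio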
	I0731 20:23:33.505192  158660 cni.go:84] Creating CNI manager for ""
	I0731 20:23:33.505208  158660 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 20:23:33.505219  158660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:23:33.505246  158660 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-094885 NodeName:multinode-094885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:23:33.505435  158660 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-094885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
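The generated kubeadm config above is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp step below). As a hedged sketch, and assuming the `kubeadm config validate` subcommand is available in this v1.30.3 binary, the rendered file could be checked by hand on the node before it is used:

	# Hedged example: validate the rendered kubeadm config on the control-plane node.
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new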
	
	I0731 20:23:33.505516  158660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:23:33.515784  158660 command_runner.go:130] > kubeadm
	I0731 20:23:33.515809  158660 command_runner.go:130] > kubectl
	I0731 20:23:33.515815  158660 command_runner.go:130] > kubelet
	I0731 20:23:33.515841  158660 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:23:33.515887  158660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:23:33.525939  158660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 20:23:33.542866  158660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:23:33.560206  158660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0731 20:23:33.577802  158660 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I0731 20:23:33.581933  158660 command_runner.go:130] > 192.168.39.193	control-plane.minikube.internal
	I0731 20:23:33.582028  158660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:23:33.721738  158660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:23:33.737326  158660 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885 for IP: 192.168.39.193
	I0731 20:23:33.737359  158660 certs.go:194] generating shared ca certs ...
	I0731 20:23:33.737380  158660 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:23:33.737557  158660 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:23:33.737598  158660 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:23:33.737608  158660 certs.go:256] generating profile certs ...
	I0731 20:23:33.737700  158660 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/client.key
	I0731 20:23:33.737743  158660 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.key.3eab5c8e
	I0731 20:23:33.737782  158660 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.key
	I0731 20:23:33.737806  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:23:33.737820  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:23:33.737831  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:23:33.737841  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:23:33.737850  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:23:33.737863  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:23:33.737873  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:23:33.737885  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:23:33.737935  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:23:33.737961  158660 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:23:33.737971  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:23:33.737990  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:23:33.738015  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:23:33.738036  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:23:33.738071  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:23:33.738096  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:33.738109  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 20:23:33.738121  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 20:23:33.738662  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:23:33.763788  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:23:33.788883  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:23:33.813822  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:23:33.838589  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:23:33.863142  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:23:33.887890  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:23:33.912118  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:23:33.936282  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:23:33.960194  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:23:33.984346  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:23:34.008173  158660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:23:34.025242  158660 ssh_runner.go:195] Run: openssl version
	I0731 20:23:34.031227  158660 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 20:23:34.031360  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:23:34.042500  158660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:34.047217  158660 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:34.047258  158660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:34.047304  158660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:34.053012  158660 command_runner.go:130] > b5213941
	I0731 20:23:34.053180  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:23:34.062920  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:23:34.073955  158660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:23:34.078709  158660 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:23:34.078732  158660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:23:34.078779  158660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:23:34.084688  158660 command_runner.go:130] > 51391683
	I0731 20:23:34.084755  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:23:34.094387  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:23:34.105505  158660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:23:34.110522  158660 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:23:34.110598  158660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:23:34.110649  158660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:23:34.116274  158660 command_runner.go:130] > 3ec20f2e
	I0731 20:23:34.116585  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
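The three blocks above follow OpenSSL's hashed-symlink convention: each CA placed under /usr/share/ca-certificates is hashed with `openssl x509 -hash` and linked into /etc/ssl/certs as <hash>.0 so the system trust store resolves it. A minimal sketch of the same steps for a single certificate, using the paths from this run:

	# Sketch of the hash-and-link steps the log performs for minikubeCA.pem.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"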
	I0731 20:23:34.126536  158660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:23:34.131200  158660 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:23:34.131225  158660 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 20:23:34.131233  158660 command_runner.go:130] > Device: 253,1	Inode: 533291      Links: 1
	I0731 20:23:34.131243  158660 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 20:23:34.131251  158660 command_runner.go:130] > Access: 2024-07-31 20:16:25.458336209 +0000
	I0731 20:23:34.131258  158660 command_runner.go:130] > Modify: 2024-07-31 20:16:25.458336209 +0000
	I0731 20:23:34.131266  158660 command_runner.go:130] > Change: 2024-07-31 20:16:25.458336209 +0000
	I0731 20:23:34.131273  158660 command_runner.go:130] >  Birth: 2024-07-31 20:16:25.458336209 +0000
	I0731 20:23:34.131382  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:23:34.137061  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.137307  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:23:34.143170  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.143235  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:23:34.148758  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.148825  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:23:34.154670  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.154733  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:23:34.160397  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.160694  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:23:34.166284  158660 command_runner.go:130] > Certificate will not expire
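Each certificate is treated as valid only if it will not expire within the next 24 hours; `-checkend 86400` exits non-zero when expiry falls inside that window. A sketch of running the same check across the minikube certificate directory (the loop is assumed; the paths come from the files checked above):

	# Hedged example: report any minikube-managed cert expiring within 24h (86400s).
	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  sudo openssl x509 -noout -in "$c" -checkend 86400 || echo "expiring soon: $c"
	done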
	I0731 20:23:34.166591  158660 kubeadm.go:392] StartCluster: {Name:multinode-094885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-094885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.53 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:23:34.166738  158660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:23:34.166795  158660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:23:34.202327  158660 command_runner.go:130] > 7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb
	I0731 20:23:34.202363  158660 command_runner.go:130] > aebb97e9d0b5666e5da4442730c50929905272ee9c25c006a4c9e5eda35ef98b
	I0731 20:23:34.202373  158660 command_runner.go:130] > 72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230
	I0731 20:23:34.202384  158660 command_runner.go:130] > 86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a
	I0731 20:23:34.202581  158660 command_runner.go:130] > 4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761
	I0731 20:23:34.202624  158660 command_runner.go:130] > 3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9
	I0731 20:23:34.202636  158660 command_runner.go:130] > bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d
	I0731 20:23:34.202647  158660 command_runner.go:130] > 25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb
	I0731 20:23:34.202722  158660 command_runner.go:130] > 1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521
	I0731 20:23:34.204304  158660 cri.go:89] found id: "7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb"
	I0731 20:23:34.204321  158660 cri.go:89] found id: "aebb97e9d0b5666e5da4442730c50929905272ee9c25c006a4c9e5eda35ef98b"
	I0731 20:23:34.204337  158660 cri.go:89] found id: "72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230"
	I0731 20:23:34.204341  158660 cri.go:89] found id: "86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a"
	I0731 20:23:34.204345  158660 cri.go:89] found id: "4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761"
	I0731 20:23:34.204350  158660 cri.go:89] found id: "3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9"
	I0731 20:23:34.204354  158660 cri.go:89] found id: "bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d"
	I0731 20:23:34.204358  158660 cri.go:89] found id: "25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb"
	I0731 20:23:34.204362  158660 cri.go:89] found id: "1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521"
	I0731 20:23:34.204369  158660 cri.go:89] found id: ""
	I0731 20:23:34.204426  158660 ssh_runner.go:195] Run: sudo runc list -f json
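StartCluster begins by enumerating the existing kube-system containers through the CRI so they can be restarted in place; the crictl invocation above returns bare container IDs because of --quiet. A sketch of the same query run by hand on the node, assuming crictl is configured against the CRI-O socket:

	# Hedged example: list kube-system containers the same way the log does, then with full metadata.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json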
	
	
	==> CRI-O <==
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.371608603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457518371527673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be326c7d-9df8-4745-a3aa-5c0ffa887679 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.372202456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24431f0f-93e8-4018-8dbc-6c5ce7673c3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.372323135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24431f0f-93e8-4018-8dbc-6c5ce7673c3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.372655612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15436fdd715785b635301ef11a649bb91a95d21320efe544557f270eace6df3f,PodSandboxId:1906f71f375b03f83f43c1528754732e1bfab0e9bbbf79523cacefa9ae27f715,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457455146360216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3,PodSandboxId:8a835dbc9646b109d083034ec8434f3d59a55bb87e020ad62206eeaa03be1fca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722457421633891789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1,PodSandboxId:ee745ccf6b1f15ef5701ee62a3ed93b59dcb2e4d5aeb23a16c40b9ea0cfe93a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457421587768075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97d9921b6b5b5e2e94ac6d8334ef0a99462ed8bb280be53483e354fc701ed19,PodSandboxId:9f1923274a45e66fdb7dba3d2b0ea4762ccdd3142a59f5b923bc4b6eee444280,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457421465473809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},An
notations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5,PodSandboxId:7775de6c551cfa697c443a1c10393f4987772634e09d1fe63430f301d84e5fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722457421417028604,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3,PodSandboxId:225fc91b7219deee45bab76b3fb7e7adf461f40fdaa6d410cda4871f1c90fc75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457416548041781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119,PodSandboxId:76048d11be0b39034995b7a3f5beb46372177f5072ed6a70450fdac83707a0da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457416478564127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85,PodSandboxId:a680464821f8e0e9fede1b3977123f951d9fb3c5ce53dec2842e5ce3272799ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457416512645124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83,PodSandboxId:eaf65f3cefe47f366ebebdf1c1b7fd10b0193a8b4de60003ad3f5ed12ec7fcb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457416455729519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb,PodSandboxId:cb7c00fd54be8529819c7f6fbe71d0abd6baf362ef761c93c3fb16f926ae1a33,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457402800817088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d04b8842076b61a34fd5b5ecfe3702a29f477a4e4f35542098783b88c33a82ca,PodSandboxId:2659ab64605639979224b37c4547b2024bc05913f45a9a7bb405ec83131ae9af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457082072782326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230,PodSandboxId:462cb7067877aab3a2ecfea2172d63ecd9b051871faa6d504453347a15e22619,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722457024342454654,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},Annotations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a,PodSandboxId:d53634fcc825315ef3f58ad427820c1422931f1314b749c2f36bf1d2a5d16d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457012348726824,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761,PodSandboxId:bee3609a504470e74917d47a74616ca3798ef90df0b6d23171ae00239775d808,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457008537165511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9,PodSandboxId:ddd0df2e6857e8fb0ac2f5fb7b3deb0327e935ebf77ea225542a00732dc05300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722456989271588300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c
17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb,PodSandboxId:3027554ccce1d345e3b6c8beb43cbee5573a4675e09e728533c0e6c178a996f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722456989234785043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d,PodSandboxId:e2ffd4f3dae22fb8ff47764ca6c9f49bad4926aa0fee04c879f049bd513c68e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722456989237785971,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521,PodSandboxId:7d22a0fb42285db96ac26c365a427ad06da26d5291b1544cece1e6dc093ab549,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722456989177786195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24431f0f-93e8-4018-8dbc-6c5ce7673c3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.414734318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbeef8d4-de99-4ee9-a0a0-9c2d5bd51162 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.414825869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbeef8d4-de99-4ee9-a0a0-9c2d5bd51162 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.415870978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3318e14e-3000-4df6-80ef-e184f435a239 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.416379755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457518416352883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3318e14e-3000-4df6-80ef-e184f435a239 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.416834220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=097552a3-ec75-4032-9cdc-2175dec51402 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.416909202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=097552a3-ec75-4032-9cdc-2175dec51402 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.417318281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15436fdd715785b635301ef11a649bb91a95d21320efe544557f270eace6df3f,PodSandboxId:1906f71f375b03f83f43c1528754732e1bfab0e9bbbf79523cacefa9ae27f715,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457455146360216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3,PodSandboxId:8a835dbc9646b109d083034ec8434f3d59a55bb87e020ad62206eeaa03be1fca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722457421633891789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1,PodSandboxId:ee745ccf6b1f15ef5701ee62a3ed93b59dcb2e4d5aeb23a16c40b9ea0cfe93a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457421587768075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97d9921b6b5b5e2e94ac6d8334ef0a99462ed8bb280be53483e354fc701ed19,PodSandboxId:9f1923274a45e66fdb7dba3d2b0ea4762ccdd3142a59f5b923bc4b6eee444280,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457421465473809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},An
notations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5,PodSandboxId:7775de6c551cfa697c443a1c10393f4987772634e09d1fe63430f301d84e5fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722457421417028604,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3,PodSandboxId:225fc91b7219deee45bab76b3fb7e7adf461f40fdaa6d410cda4871f1c90fc75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457416548041781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119,PodSandboxId:76048d11be0b39034995b7a3f5beb46372177f5072ed6a70450fdac83707a0da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457416478564127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85,PodSandboxId:a680464821f8e0e9fede1b3977123f951d9fb3c5ce53dec2842e5ce3272799ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457416512645124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83,PodSandboxId:eaf65f3cefe47f366ebebdf1c1b7fd10b0193a8b4de60003ad3f5ed12ec7fcb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457416455729519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb,PodSandboxId:cb7c00fd54be8529819c7f6fbe71d0abd6baf362ef761c93c3fb16f926ae1a33,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457402800817088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d04b8842076b61a34fd5b5ecfe3702a29f477a4e4f35542098783b88c33a82ca,PodSandboxId:2659ab64605639979224b37c4547b2024bc05913f45a9a7bb405ec83131ae9af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457082072782326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230,PodSandboxId:462cb7067877aab3a2ecfea2172d63ecd9b051871faa6d504453347a15e22619,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722457024342454654,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},Annotations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a,PodSandboxId:d53634fcc825315ef3f58ad427820c1422931f1314b749c2f36bf1d2a5d16d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457012348726824,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761,PodSandboxId:bee3609a504470e74917d47a74616ca3798ef90df0b6d23171ae00239775d808,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457008537165511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9,PodSandboxId:ddd0df2e6857e8fb0ac2f5fb7b3deb0327e935ebf77ea225542a00732dc05300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722456989271588300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c
17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb,PodSandboxId:3027554ccce1d345e3b6c8beb43cbee5573a4675e09e728533c0e6c178a996f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722456989234785043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d,PodSandboxId:e2ffd4f3dae22fb8ff47764ca6c9f49bad4926aa0fee04c879f049bd513c68e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722456989237785971,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521,PodSandboxId:7d22a0fb42285db96ac26c365a427ad06da26d5291b1544cece1e6dc093ab549,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722456989177786195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=097552a3-ec75-4032-9cdc-2175dec51402 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.456389299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78305083-47fa-4e70-8b4a-c64d16171928 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.456467941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78305083-47fa-4e70-8b4a-c64d16171928 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.457514034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc989b26-cb68-480d-aef6-152f05b8424c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.457925143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457518457904054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc989b26-cb68-480d-aef6-152f05b8424c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.461151462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a44f335d-e84d-445d-91b6-09e84a70026d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.461454384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a44f335d-e84d-445d-91b6-09e84a70026d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.462095334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15436fdd715785b635301ef11a649bb91a95d21320efe544557f270eace6df3f,PodSandboxId:1906f71f375b03f83f43c1528754732e1bfab0e9bbbf79523cacefa9ae27f715,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457455146360216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3,PodSandboxId:8a835dbc9646b109d083034ec8434f3d59a55bb87e020ad62206eeaa03be1fca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722457421633891789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1,PodSandboxId:ee745ccf6b1f15ef5701ee62a3ed93b59dcb2e4d5aeb23a16c40b9ea0cfe93a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457421587768075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97d9921b6b5b5e2e94ac6d8334ef0a99462ed8bb280be53483e354fc701ed19,PodSandboxId:9f1923274a45e66fdb7dba3d2b0ea4762ccdd3142a59f5b923bc4b6eee444280,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457421465473809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},An
notations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5,PodSandboxId:7775de6c551cfa697c443a1c10393f4987772634e09d1fe63430f301d84e5fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722457421417028604,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3,PodSandboxId:225fc91b7219deee45bab76b3fb7e7adf461f40fdaa6d410cda4871f1c90fc75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457416548041781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119,PodSandboxId:76048d11be0b39034995b7a3f5beb46372177f5072ed6a70450fdac83707a0da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457416478564127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85,PodSandboxId:a680464821f8e0e9fede1b3977123f951d9fb3c5ce53dec2842e5ce3272799ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457416512645124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83,PodSandboxId:eaf65f3cefe47f366ebebdf1c1b7fd10b0193a8b4de60003ad3f5ed12ec7fcb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457416455729519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb,PodSandboxId:cb7c00fd54be8529819c7f6fbe71d0abd6baf362ef761c93c3fb16f926ae1a33,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457402800817088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d04b8842076b61a34fd5b5ecfe3702a29f477a4e4f35542098783b88c33a82ca,PodSandboxId:2659ab64605639979224b37c4547b2024bc05913f45a9a7bb405ec83131ae9af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457082072782326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230,PodSandboxId:462cb7067877aab3a2ecfea2172d63ecd9b051871faa6d504453347a15e22619,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722457024342454654,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},Annotations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a,PodSandboxId:d53634fcc825315ef3f58ad427820c1422931f1314b749c2f36bf1d2a5d16d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457012348726824,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761,PodSandboxId:bee3609a504470e74917d47a74616ca3798ef90df0b6d23171ae00239775d808,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457008537165511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9,PodSandboxId:ddd0df2e6857e8fb0ac2f5fb7b3deb0327e935ebf77ea225542a00732dc05300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722456989271588300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c
17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb,PodSandboxId:3027554ccce1d345e3b6c8beb43cbee5573a4675e09e728533c0e6c178a996f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722456989234785043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d,PodSandboxId:e2ffd4f3dae22fb8ff47764ca6c9f49bad4926aa0fee04c879f049bd513c68e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722456989237785971,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521,PodSandboxId:7d22a0fb42285db96ac26c365a427ad06da26d5291b1544cece1e6dc093ab549,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722456989177786195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a44f335d-e84d-445d-91b6-09e84a70026d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.509357624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc78ba31-042c-4590-a28a-3a58c0d36406 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.509472680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc78ba31-042c-4590-a28a-3a58c0d36406 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.511488872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aefb3a27-bfc5-4f8f-9cfd-354c52669299 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.511951739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457518511925835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aefb3a27-bfc5-4f8f-9cfd-354c52669299 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.512677234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cf05855-68c1-492d-be2a-a6f83fea3d13 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.512770276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cf05855-68c1-492d-be2a-a6f83fea3d13 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:25:18 multinode-094885 crio[2971]: time="2024-07-31 20:25:18.513160793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15436fdd715785b635301ef11a649bb91a95d21320efe544557f270eace6df3f,PodSandboxId:1906f71f375b03f83f43c1528754732e1bfab0e9bbbf79523cacefa9ae27f715,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457455146360216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3,PodSandboxId:8a835dbc9646b109d083034ec8434f3d59a55bb87e020ad62206eeaa03be1fca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722457421633891789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1,PodSandboxId:ee745ccf6b1f15ef5701ee62a3ed93b59dcb2e4d5aeb23a16c40b9ea0cfe93a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457421587768075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97d9921b6b5b5e2e94ac6d8334ef0a99462ed8bb280be53483e354fc701ed19,PodSandboxId:9f1923274a45e66fdb7dba3d2b0ea4762ccdd3142a59f5b923bc4b6eee444280,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457421465473809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},An
notations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5,PodSandboxId:7775de6c551cfa697c443a1c10393f4987772634e09d1fe63430f301d84e5fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722457421417028604,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3,PodSandboxId:225fc91b7219deee45bab76b3fb7e7adf461f40fdaa6d410cda4871f1c90fc75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457416548041781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119,PodSandboxId:76048d11be0b39034995b7a3f5beb46372177f5072ed6a70450fdac83707a0da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457416478564127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85,PodSandboxId:a680464821f8e0e9fede1b3977123f951d9fb3c5ce53dec2842e5ce3272799ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457416512645124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83,PodSandboxId:eaf65f3cefe47f366ebebdf1c1b7fd10b0193a8b4de60003ad3f5ed12ec7fcb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457416455729519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb,PodSandboxId:cb7c00fd54be8529819c7f6fbe71d0abd6baf362ef761c93c3fb16f926ae1a33,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457402800817088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d04b8842076b61a34fd5b5ecfe3702a29f477a4e4f35542098783b88c33a82ca,PodSandboxId:2659ab64605639979224b37c4547b2024bc05913f45a9a7bb405ec83131ae9af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457082072782326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230,PodSandboxId:462cb7067877aab3a2ecfea2172d63ecd9b051871faa6d504453347a15e22619,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722457024342454654,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},Annotations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a,PodSandboxId:d53634fcc825315ef3f58ad427820c1422931f1314b749c2f36bf1d2a5d16d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457012348726824,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761,PodSandboxId:bee3609a504470e74917d47a74616ca3798ef90df0b6d23171ae00239775d808,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457008537165511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9,PodSandboxId:ddd0df2e6857e8fb0ac2f5fb7b3deb0327e935ebf77ea225542a00732dc05300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722456989271588300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c
17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb,PodSandboxId:3027554ccce1d345e3b6c8beb43cbee5573a4675e09e728533c0e6c178a996f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722456989234785043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d,PodSandboxId:e2ffd4f3dae22fb8ff47764ca6c9f49bad4926aa0fee04c879f049bd513c68e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722456989237785971,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521,PodSandboxId:7d22a0fb42285db96ac26c365a427ad06da26d5291b1544cece1e6dc093ab549,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722456989177786195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cf05855-68c1-492d-be2a-a6f83fea3d13 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	15436fdd71578       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   1906f71f375b0       busybox-fc5497c4f-wwlpt
	64ad53c3587b5       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   8a835dbc9646b       kindnet-glw6d
	a7e9d628cf7a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   ee745ccf6b1f1       coredns-7db6d8ff4d-sh4fx
	a97d9921b6b5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   9f1923274a45e       storage-provisioner
	4bd55608813a0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   7775de6c551cf       kube-proxy-vcsv5
	95e1203585db2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   225fc91b7219d       kube-controller-manager-multinode-094885
	7ccb2a4daa9e2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   a680464821f8e       kube-scheduler-multinode-094885
	170e2ce2375b5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   76048d11be0b3       kube-apiserver-multinode-094885
	c155ec9b0f0ae       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   eaf65f3cefe47       etcd-multinode-094885
	7ccbc51911f88       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   cb7c00fd54be8       coredns-7db6d8ff4d-sh4fx
	d04b8842076b6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   2659ab6460563       busybox-fc5497c4f-wwlpt
	72e862487909d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   462cb7067877a       storage-provisioner
	86082e9a17e18       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   d53634fcc8253       kindnet-glw6d
	4d7a4222195e1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   bee3609a50447       kube-proxy-vcsv5
	3cb70dfa50e8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   ddd0df2e6857e       etcd-multinode-094885
	bd55ce3db2a7d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   e2ffd4f3dae22       kube-apiserver-multinode-094885
	25141b1279c4b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   3027554ccce1d       kube-scheduler-multinode-094885
	1d62542ea5da5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   7d22a0fb42285       kube-controller-manager-multinode-094885
	
	
	==> coredns [7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:38098 - 12928 "HINFO IN 8460983658922911469.4892911557547791121. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015202375s
	
	
	==> coredns [a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60945 - 38704 "HINFO IN 8032362761859454990.3906396736182584970. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01329722s
	
	
	==> describe nodes <==
	Name:               multinode-094885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-094885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=multinode-094885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_16_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-094885
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:25:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:23:40 +0000   Wed, 31 Jul 2024 20:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:23:40 +0000   Wed, 31 Jul 2024 20:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:23:40 +0000   Wed, 31 Jul 2024 20:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:23:40 +0000   Wed, 31 Jul 2024 20:17:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    multinode-094885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee7f996ff9584406978b08296319e67b
	  System UUID:                ee7f996f-f958-4406-978b-08296319e67b
	  Boot ID:                    2e0f464f-999d-45cb-8453-39f654e528b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwlpt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 coredns-7db6d8ff4d-sh4fx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m30s
	  kube-system                 etcd-multinode-094885                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m44s
	  kube-system                 kindnet-glw6d                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m31s
	  kube-system                 kube-apiserver-multinode-094885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 kube-controller-manager-multinode-094885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-proxy-vcsv5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-scheduler-multinode-094885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m29s                kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m44s                kubelet          Node multinode-094885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m44s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m44s                kubelet          Node multinode-094885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m44s                kubelet          Node multinode-094885 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m44s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m31s                node-controller  Node multinode-094885 event: Registered Node multinode-094885 in Controller
	  Normal  NodeReady                8m15s                kubelet          Node multinode-094885 status is now: NodeReady
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node multinode-094885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node multinode-094885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node multinode-094885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                  node-controller  Node multinode-094885 event: Registered Node multinode-094885 in Controller
	
	
	Name:               multinode-094885-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-094885-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=multinode-094885
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_24_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:24:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-094885-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:25:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:24:48 +0000   Wed, 31 Jul 2024 20:24:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:24:48 +0000   Wed, 31 Jul 2024 20:24:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:24:48 +0000   Wed, 31 Jul 2024 20:24:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:24:48 +0000   Wed, 31 Jul 2024 20:24:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    multinode-094885-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8471c475397b4e6fabcd63873a1dced7
	  System UUID:                8471c475-397b-4e6f-abcd-63873a1dced7
	  Boot ID:                    c6d77d04-2134-42c5-a8e5-3a3f6010ec60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pmhlm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-w7fnj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m42s
	  kube-system                 kube-proxy-g62ct           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m37s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m42s (x2 over 7m43s)  kubelet     Node multinode-094885-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x2 over 7m43s)  kubelet     Node multinode-094885-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x2 over 7m43s)  kubelet     Node multinode-094885-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m22s                  kubelet     Node multinode-094885-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-094885-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-094885-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-094885-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-094885-m02 status is now: NodeReady
	
	
	Name:               multinode-094885-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-094885-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=multinode-094885
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_24_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:24:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-094885-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:25:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:25:15 +0000   Wed, 31 Jul 2024 20:24:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:25:15 +0000   Wed, 31 Jul 2024 20:24:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:25:15 +0000   Wed, 31 Jul 2024 20:24:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:25:15 +0000   Wed, 31 Jul 2024 20:25:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    multinode-094885-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5198f3f1c6e14596bccabe142ea7775c
	  System UUID:                5198f3f1-c6e1-4596-bcca-be142ea7775c
	  Boot ID:                    e03f9448-1fda-447e-a659-f5764f7bf65a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-68dx5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m44s
	  kube-system                 kube-proxy-mpj87    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m38s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m48s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m44s (x2 over 6m44s)  kubelet     Node multinode-094885-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x2 over 6m44s)  kubelet     Node multinode-094885-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s (x2 over 6m44s)  kubelet     Node multinode-094885-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m23s                  kubelet     Node multinode-094885-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet     Node multinode-094885-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet     Node multinode-094885-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet     Node multinode-094885-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m53s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m33s                  kubelet     Node multinode-094885-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-094885-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-094885-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-094885-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-094885-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.058164] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195356] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.122081] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285227] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.220028] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +4.174735] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.054729] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990974] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.086164] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.583485] systemd-fstab-generator[1467]: Ignoring "noauto" option for root device
	[  +0.110133] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.376545] kauditd_printk_skb: 56 callbacks suppressed
	[Jul31 20:17] kauditd_printk_skb: 12 callbacks suppressed
	[Jul31 20:23] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.134393] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.174483] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.132118] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.382515] systemd-fstab-generator[2939]: Ignoring "noauto" option for root device
	[ +10.722573] systemd-fstab-generator[3079]: Ignoring "noauto" option for root device
	[  +0.083948] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.882554] systemd-fstab-generator[3219]: Ignoring "noauto" option for root device
	[  +5.734902] kauditd_printk_skb: 76 callbacks suppressed
	[ +12.227317] systemd-fstab-generator[4046]: Ignoring "noauto" option for root device
	[  +0.110520] kauditd_printk_skb: 32 callbacks suppressed
	[Jul31 20:24] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9] <==
	{"level":"info","ts":"2024-07-31T20:16:29.797627Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:16:29.797667Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:16:29.801344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:16:29.801438Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T20:16:29.795805Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	{"level":"warn","ts":"2024-07-31T20:17:36.229742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.33051ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517753453783015819 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-094885-m02.17e7659101756127\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-094885-m02.17e7659101756127\" value_size:646 lease:1294381416928240009 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:17:36.230194Z","caller":"traceutil/trace.go:171","msg":"trace[592286302] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"239.496548ms","start":"2024-07-31T20:17:35.990664Z","end":"2024-07-31T20:17:36.230161Z","steps":["trace[592286302] 'process raft request'  (duration: 90.075797ms)","trace[592286302] 'compare'  (duration: 148.08338ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T20:17:36.230602Z","caller":"traceutil/trace.go:171","msg":"trace[777462368] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"203.82761ms","start":"2024-07-31T20:17:36.026725Z","end":"2024-07-31T20:17:36.230553Z","steps":["trace[777462368] 'process raft request'  (duration: 203.361056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:18:34.511753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.068765ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517753453783016285 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-094885-m03.17e7659e94e51b10\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-094885-m03.17e7659e94e51b10\" value_size:642 lease:1294381416928240009 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:18:34.512151Z","caller":"traceutil/trace.go:171","msg":"trace[1625623834] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"154.487095ms","start":"2024-07-31T20:18:34.357586Z","end":"2024-07-31T20:18:34.512073Z","steps":["trace[1625623834] 'process raft request'  (duration: 154.355749ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:18:34.51241Z","caller":"traceutil/trace.go:171","msg":"trace[447460658] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"220.023156ms","start":"2024-07-31T20:18:34.292373Z","end":"2024-07-31T20:18:34.512396Z","steps":["trace[447460658] 'process raft request'  (duration: 61.17617ms)","trace[447460658] 'compare'  (duration: 157.969169ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:18:37.748963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.044594ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517753453783016351 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-094885-m03.17e7659f4fa51335\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-094885-m03.17e7659f4fa51335\" value_size:629 lease:1294381416928240535 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:18:37.749385Z","caller":"traceutil/trace.go:171","msg":"trace[1938264959] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"344.121957ms","start":"2024-07-31T20:18:37.405095Z","end":"2024-07-31T20:18:37.749217Z","steps":["trace[1938264959] 'process raft request'  (duration: 83.768996ms)","trace[1938264959] 'compare'  (duration: 259.661316ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:18:37.749498Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:18:37.405079Z","time spent":"344.379339ms","remote":"127.0.0.1:46256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":709,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/multinode-094885-m03.17e7659f4fa51335\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-094885-m03.17e7659f4fa51335\" value_size:629 lease:1294381416928240535 >> failure:<>"}
	{"level":"info","ts":"2024-07-31T20:18:37.749817Z","caller":"traceutil/trace.go:171","msg":"trace[982007433] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"254.015547ms","start":"2024-07-31T20:18:37.495794Z","end":"2024-07-31T20:18:37.74981Z","steps":["trace[982007433] 'process raft request'  (duration: 253.845164ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:21:50.685385Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T20:21:50.685547Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-094885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	{"level":"warn","ts":"2024-07-31T20:21:50.685692Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:21:50.68578Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:21:50.772605Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:21:50.772696Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T20:21:50.772764Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"97ba5874d4d591f6","current-leader-member-id":"97ba5874d4d591f6"}
	{"level":"info","ts":"2024-07-31T20:21:50.775469Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-07-31T20:21:50.775688Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-07-31T20:21:50.775722Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-094885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	
	
	==> etcd [c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83] <==
	{"level":"info","ts":"2024-07-31T20:23:36.935141Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T20:23:36.93754Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"97ba5874d4d591f6","initial-advertise-peer-urls":["https://192.168.39.193:2380"],"listen-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.193:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T20:23:36.937716Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T20:23:36.937833Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-07-31T20:23:36.937842Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-07-31T20:23:36.928549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 switched to configuration voters=(10933148304205517302)"}
	{"level":"info","ts":"2024-07-31T20:23:36.939137Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","added-peer-id":"97ba5874d4d591f6","added-peer-peer-urls":["https://192.168.39.193:2380"]}
	{"level":"info","ts":"2024-07-31T20:23:36.939336Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:23:36.939373Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:23:36.939001Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T20:23:36.957711Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T20:23:38.779631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T20:23:38.779748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T20:23:38.779808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgPreVoteResp from 97ba5874d4d591f6 at term 2"}
	{"level":"info","ts":"2024-07-31T20:23:38.779838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T20:23:38.779863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgVoteResp from 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-07-31T20:23:38.779889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T20:23:38.779919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97ba5874d4d591f6 elected leader 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-07-31T20:23:38.785119Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"97ba5874d4d591f6","local-member-attributes":"{Name:multinode-094885 ClientURLs:[https://192.168.39.193:2379]}","request-path":"/0/members/97ba5874d4d591f6/attributes","cluster-id":"9afeb12ac4c1a90a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T20:23:38.785398Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:23:38.785447Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:23:38.785879Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:23:38.785925Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T20:23:38.787625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:23:38.788064Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	
	
	==> kernel <==
	 20:25:19 up 9 min,  0 users,  load average: 0.56, 0.35, 0.17
	Linux multinode-094885 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3] <==
	I0731 20:24:32.665020       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:24:42.661465       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:24:42.661537       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:24:42.661717       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:24:42.661752       1 main.go:299] handling current node
	I0731 20:24:42.661767       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:24:42.661774       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:24:52.664454       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:24:52.664584       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:24:52.664800       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:24:52.664840       1 main.go:299] handling current node
	I0731 20:24:52.664964       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:24:52.665006       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:25:02.660781       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:25:02.661012       1 main.go:299] handling current node
	I0731 20:25:02.661060       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:25:02.661081       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:25:02.661348       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:25:02.661384       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.2.0/24] 
	I0731 20:25:12.662967       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:25:12.663153       1 main.go:299] handling current node
	I0731 20:25:12.663186       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:25:12.663209       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:25:12.663556       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:25:12.663592       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a] <==
	I0731 20:21:03.462607       1 main.go:299] handling current node
	I0731 20:21:13.456312       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:21:13.456525       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:21:13.456729       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:21:13.456760       1 main.go:299] handling current node
	I0731 20:21:13.456787       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:21:13.456812       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:21:23.462076       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:21:23.462162       1 main.go:299] handling current node
	I0731 20:21:23.462192       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:21:23.462198       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:21:23.462392       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:21:23.462418       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:21:33.455660       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:21:33.455733       1 main.go:299] handling current node
	I0731 20:21:33.455749       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:21:33.455755       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:21:33.455889       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:21:33.455912       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:21:43.464371       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:21:43.464514       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:21:43.464712       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:21:43.464754       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:21:43.464840       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:21:43.464860       1 main.go:299] handling current node
	
	
	==> kube-apiserver [170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119] <==
	I0731 20:23:40.044919       1 establishing_controller.go:76] Starting EstablishingController
	I0731 20:23:40.044947       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0731 20:23:40.044990       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0731 20:23:40.045010       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0731 20:23:40.094753       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 20:23:40.094953       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 20:23:40.095080       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 20:23:40.095559       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 20:23:40.095607       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 20:23:40.099779       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 20:23:40.102440       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0731 20:23:40.111002       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 20:23:40.117541       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 20:23:40.121945       1 cache.go:39] Caches are synced for autoregister controller
	I0731 20:23:40.146324       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 20:23:40.146402       1 policy_source.go:224] refreshing policies
	I0731 20:23:40.163569       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 20:23:41.007002       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 20:23:42.353986       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 20:23:42.481931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 20:23:42.495894       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 20:23:42.580703       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 20:23:42.587063       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 20:23:53.388193       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:23:53.688719       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d] <==
	E0731 20:18:04.709119       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:53440: use of closed network connection
	E0731 20:18:04.876811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:53456: use of closed network connection
	E0731 20:18:05.050967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:53476: use of closed network connection
	E0731 20:18:05.226571       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:53486: use of closed network connection
	I0731 20:21:50.690108       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0731 20:21:50.703975       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0731 20:21:50.705585       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0731 20:21:50.706628       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.707822       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.708013       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.708048       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.708184       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.708215       1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709282       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709327       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709650       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709849       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709949       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710160       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710343       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710451       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710557       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710864       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.712088       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0731 20:21:50.712435       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521] <==
	I0731 20:17:36.234914       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m02\" does not exist"
	I0731 20:17:36.248769       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m02" podCIDRs=["10.244.1.0/24"]
	I0731 20:17:37.378473       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-094885-m02"
	I0731 20:17:56.867194       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:17:59.072287       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.152173ms"
	I0731 20:17:59.095540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.052525ms"
	I0731 20:17:59.095648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.227µs"
	I0731 20:17:59.095713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.043µs"
	I0731 20:18:02.653226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.674767ms"
	I0731 20:18:02.653725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.07µs"
	I0731 20:18:03.126879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.210578ms"
	I0731 20:18:03.127026       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.757µs"
	I0731 20:18:34.519580       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:18:34.519958       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m03\" does not exist"
	I0731 20:18:34.536858       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m03" podCIDRs=["10.244.2.0/24"]
	I0731 20:18:37.403056       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-094885-m03"
	I0731 20:18:55.129426       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:19:24.360520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:19:25.569329       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m03\" does not exist"
	I0731 20:19:25.569573       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:19:25.586530       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m03" podCIDRs=["10.244.3.0/24"]
	I0731 20:19:45.141447       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:20:27.464490       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:20:32.554760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.191405ms"
	I0731 20:20:32.555475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.059µs"
	
	
	==> kube-controller-manager [95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3] <==
	I0731 20:23:54.084068       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 20:23:54.084156       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 20:24:13.579481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.181103ms"
	I0731 20:24:13.594371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.003253ms"
	I0731 20:24:13.594476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.673µs"
	I0731 20:24:13.594581       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.402µs"
	I0731 20:24:18.109536       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m02\" does not exist"
	I0731 20:24:18.131785       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m02" podCIDRs=["10.244.1.0/24"]
	I0731 20:24:19.049534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.809µs"
	I0731 20:24:19.064344       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.04µs"
	I0731 20:24:19.073774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.207µs"
	I0731 20:24:19.088173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.493µs"
	I0731 20:24:19.095997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.904µs"
	I0731 20:24:19.100814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.613µs"
	I0731 20:24:23.650146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.903µs"
	I0731 20:24:36.618847       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:24:36.639664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.239µs"
	I0731 20:24:36.655381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.418µs"
	I0731 20:24:40.452863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.03308ms"
	I0731 20:24:40.453085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.991µs"
	I0731 20:24:54.818891       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:24:55.913882       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m03\" does not exist"
	I0731 20:24:55.915681       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:24:55.938389       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m03" podCIDRs=["10.244.2.0/24"]
	I0731 20:25:15.656525       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	
	
	==> kube-proxy [4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5] <==
	I0731 20:23:41.710109       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:23:41.737657       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0731 20:23:41.804156       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:23:41.804305       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:23:41.804325       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:23:41.807058       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:23:41.807599       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:23:41.807637       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:23:41.809712       1 config.go:192] "Starting service config controller"
	I0731 20:23:41.809744       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:23:41.809772       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:23:41.809776       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:23:41.810184       1 config.go:319] "Starting node config controller"
	I0731 20:23:41.810225       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:23:41.910811       1 shared_informer.go:320] Caches are synced for node config
	I0731 20:23:41.910889       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:23:41.910922       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761] <==
	I0731 20:16:48.835290       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:16:48.850192       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0731 20:16:48.889489       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:16:48.889551       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:16:48.889567       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:16:48.892347       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:16:48.892598       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:16:48.892628       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:16:48.895350       1 config.go:192] "Starting service config controller"
	I0731 20:16:48.895592       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:16:48.895923       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:16:48.895932       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:16:48.898421       1 config.go:319] "Starting node config controller"
	I0731 20:16:48.898446       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:16:48.996094       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:16:48.996094       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:16:48.998704       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb] <==
	E0731 20:16:31.735906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:16:32.547604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:16:32.547738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:16:32.583434       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 20:16:32.583527       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:16:32.613467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:16:32.613557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 20:16:32.657763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 20:16:32.657792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:16:32.659827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 20:16:32.659877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 20:16:32.770205       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 20:16:32.770315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 20:16:32.859076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:16:32.859140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 20:16:32.921452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 20:16:32.921499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 20:16:32.941003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 20:16:32.941151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 20:16:33.061288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 20:16:33.061336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 20:16:33.064549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 20:16:33.064624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 20:16:34.330734       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 20:21:50.687099       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85] <==
	I0731 20:23:37.605960       1 serving.go:380] Generated self-signed cert in-memory
	I0731 20:23:40.083371       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 20:23:40.083430       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:23:40.087125       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 20:23:40.087211       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0731 20:23:40.087218       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0731 20:23:40.087300       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 20:23:40.090607       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 20:23:40.090638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 20:23:40.090653       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0731 20:23:40.090659       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 20:23:40.188148       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0731 20:23:40.191661       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 20:23:40.191720       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 20:23:36 multinode-094885 kubelet[3226]: E0731 20:23:36.520886    3226 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.193:8443: connect: connection refused" node="multinode-094885"
	Jul 31 20:23:37 multinode-094885 kubelet[3226]: I0731 20:23:37.322866    3226 kubelet_node_status.go:73] "Attempting to register node" node="multinode-094885"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.162104    3226 kubelet_node_status.go:112] "Node was previously registered" node="multinode-094885"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.162526    3226 kubelet_node_status.go:76] "Successfully registered node" node="multinode-094885"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.164010    3226 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.164956    3226 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.796691    3226 apiserver.go:52] "Watching apiserver"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.801284    3226 topology_manager.go:215] "Topology Admit Handler" podUID="474aef7b-6525-439f-baa8-801e799ea6a7" podNamespace="kube-system" podName="kube-proxy-vcsv5"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.801424    3226 topology_manager.go:215] "Topology Admit Handler" podUID="9252257a-3126-4945-8013-bbd3a4c9f820" podNamespace="kube-system" podName="kindnet-glw6d"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.801466    3226 topology_manager.go:215] "Topology Admit Handler" podUID="360a1917-a5a8-4093-b355-c774cccc8548" podNamespace="kube-system" podName="storage-provisioner"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.801501    3226 topology_manager.go:215] "Topology Admit Handler" podUID="34113636-7979-4b54-bf2a-37c49178450d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sh4fx"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.801552    3226 topology_manager.go:215] "Topology Admit Handler" podUID="49fb91bd-1c6c-4dfb-af51-7f1604463b26" podNamespace="default" podName="busybox-fc5497c4f-wwlpt"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.814812    3226 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.866746    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/474aef7b-6525-439f-baa8-801e799ea6a7-xtables-lock\") pod \"kube-proxy-vcsv5\" (UID: \"474aef7b-6525-439f-baa8-801e799ea6a7\") " pod="kube-system/kube-proxy-vcsv5"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.866891    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/474aef7b-6525-439f-baa8-801e799ea6a7-lib-modules\") pod \"kube-proxy-vcsv5\" (UID: \"474aef7b-6525-439f-baa8-801e799ea6a7\") " pod="kube-system/kube-proxy-vcsv5"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.866991    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9252257a-3126-4945-8013-bbd3a4c9f820-xtables-lock\") pod \"kindnet-glw6d\" (UID: \"9252257a-3126-4945-8013-bbd3a4c9f820\") " pod="kube-system/kindnet-glw6d"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.867087    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9252257a-3126-4945-8013-bbd3a4c9f820-lib-modules\") pod \"kindnet-glw6d\" (UID: \"9252257a-3126-4945-8013-bbd3a4c9f820\") " pod="kube-system/kindnet-glw6d"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.867191    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9252257a-3126-4945-8013-bbd3a4c9f820-cni-cfg\") pod \"kindnet-glw6d\" (UID: \"9252257a-3126-4945-8013-bbd3a4c9f820\") " pod="kube-system/kindnet-glw6d"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.867742    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/360a1917-a5a8-4093-b355-c774cccc8548-tmp\") pod \"storage-provisioner\" (UID: \"360a1917-a5a8-4093-b355-c774cccc8548\") " pod="kube-system/storage-provisioner"
	Jul 31 20:23:43 multinode-094885 kubelet[3226]: I0731 20:23:43.478173    3226 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 20:24:35 multinode-094885 kubelet[3226]: E0731 20:24:35.870862    3226 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:24:35 multinode-094885 kubelet[3226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:24:35 multinode-094885 kubelet[3226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:24:35 multinode-094885 kubelet[3226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:24:35 multinode-094885 kubelet[3226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:25:18.109674  159809 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-121704/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
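The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: by default a Scanner refuses any line longer than bufio.MaxScanTokenSize (64 KiB), and lastStart.txt contains very long single-line entries such as the flattened cluster config reproduced later in these logs. A minimal, self-contained sketch of that behaviour (not minikube's own code; the file path is hypothetical) and of raising the limit with Scanner.Buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path, standing in for .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// With the default buffer, any line longer than bufio.MaxScanTokenSize
	// (64 KiB) makes Scan return false and Err return bufio.ErrTooLong,
	// i.e. "bufio.Scanner: token too long". Raising the limit avoids that.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow lines up to 1 MiB
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}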
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-094885 -n multinode-094885
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-094885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (332.04s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-094885 stop: exit status 82 (2m0.464094656s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-094885-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-094885 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status
E0731 20:27:34.577548  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-094885 status: exit status 3 (18.758149675s)

                                                
                                                
-- stdout --
	multinode-094885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-094885-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:27:41.285741  160466 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	E0731 20:27:41.285792  160466 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-094885 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-094885 -n multinode-094885
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-094885 logs -n 25: (1.417447058s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m02:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885:/home/docker/cp-test_multinode-094885-m02_multinode-094885.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885 sudo cat                                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m02_multinode-094885.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m02:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03:/home/docker/cp-test_multinode-094885-m02_multinode-094885-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885-m03 sudo cat                                   | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m02_multinode-094885-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp testdata/cp-test.txt                                                | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4009504673/001/cp-test_multinode-094885-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885:/home/docker/cp-test_multinode-094885-m03_multinode-094885.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885 sudo cat                                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m03_multinode-094885.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt                       | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02:/home/docker/cp-test_multinode-094885-m03_multinode-094885-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885-m02 sudo cat                                   | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m03_multinode-094885-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-094885 node stop m03                                                          | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	| node    | multinode-094885 node start                                                             | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-094885                                                                | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC |                     |
	| stop    | -p multinode-094885                                                                     | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC |                     |
	| start   | -p multinode-094885                                                                     | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:21 UTC | 31 Jul 24 20:25 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-094885                                                                | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:25 UTC |                     |
	| node    | multinode-094885 node delete                                                            | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:25 UTC | 31 Jul 24 20:25 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-094885 stop                                                                   | multinode-094885 | jenkins | v1.33.1 | 31 Jul 24 20:25 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:21:49
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:21:49.814033  158660 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:21:49.814280  158660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:21:49.814289  158660 out.go:304] Setting ErrFile to fd 2...
	I0731 20:21:49.814293  158660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:21:49.814488  158660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:21:49.815055  158660 out.go:298] Setting JSON to false
	I0731 20:21:49.815994  158660 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7446,"bootTime":1722449864,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:21:49.816066  158660 start.go:139] virtualization: kvm guest
	I0731 20:21:49.818471  158660 out.go:177] * [multinode-094885] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:21:49.820045  158660 notify.go:220] Checking for updates...
	I0731 20:21:49.820053  158660 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:21:49.821356  158660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:21:49.822690  158660 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:21:49.823849  158660 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:21:49.825020  158660 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:21:49.826191  158660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:21:49.827891  158660 config.go:182] Loaded profile config "multinode-094885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:21:49.827974  158660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:21:49.828361  158660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:21:49.828418  158660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:21:49.843387  158660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0731 20:21:49.843798  158660 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:21:49.844453  158660 main.go:141] libmachine: Using API Version  1
	I0731 20:21:49.844482  158660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:21:49.844822  158660 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:21:49.845021  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:21:49.880438  158660 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:21:49.881682  158660 start.go:297] selected driver: kvm2
	I0731 20:21:49.881696  158660 start.go:901] validating driver "kvm2" against &{Name:multinode-094885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-094885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.53 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:21:49.881849  158660 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:21:49.882163  158660 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:21:49.882231  158660 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:21:49.897771  158660 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:21:49.898448  158660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:21:49.898476  158660 cni.go:84] Creating CNI manager for ""
	I0731 20:21:49.898484  158660 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 20:21:49.898549  158660 start.go:340] cluster config:
	{Name:multinode-094885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-094885 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.53 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:21:49.898701  158660 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:21:49.901238  158660 out.go:177] * Starting "multinode-094885" primary control-plane node in "multinode-094885" cluster
	I0731 20:21:49.902506  158660 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:21:49.902536  158660 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 20:21:49.902543  158660 cache.go:56] Caching tarball of preloaded images
	I0731 20:21:49.902639  158660 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:21:49.902651  158660 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 20:21:49.902774  158660 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/config.json ...
	I0731 20:21:49.902963  158660 start.go:360] acquireMachinesLock for multinode-094885: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:21:49.903007  158660 start.go:364] duration metric: took 25.586µs to acquireMachinesLock for "multinode-094885"
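
The "duration metric: took 25.586µs" line above is produced by timing the machines-lock acquisition. A rough sketch of the pattern, with a plain sync.Mutex standing in for minikube's named lock (an assumption, not the real implementation):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var machinesLock sync.Mutex // stand-in for the named machines lock

        start := time.Now()
        machinesLock.Lock()
        defer machinesLock.Unlock()

        // Uncontended, this acquires in microseconds, as the log shows.
        fmt.Printf("duration metric: took %s to acquireMachinesLock\n", time.Since(start))
    }
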
	I0731 20:21:49.903026  158660 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:21:49.903035  158660 fix.go:54] fixHost starting: 
	I0731 20:21:49.903282  158660 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:21:49.903316  158660 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:21:49.917918  158660 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0731 20:21:49.918370  158660 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:21:49.918983  158660 main.go:141] libmachine: Using API Version  1
	I0731 20:21:49.919008  158660 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:21:49.919375  158660 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:21:49.919579  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:21:49.919747  158660 main.go:141] libmachine: (multinode-094885) Calling .GetState
	I0731 20:21:49.921359  158660 fix.go:112] recreateIfNeeded on multinode-094885: state=Running err=<nil>
	W0731 20:21:49.921379  158660 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:21:49.923232  158660 out.go:177] * Updating the running kvm2 "multinode-094885" VM ...
	I0731 20:21:49.924449  158660 machine.go:94] provisionDockerMachine start ...
	I0731 20:21:49.924469  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:21:49.924674  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:49.926903  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:49.927345  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:49.927373  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:49.927538  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:49.927716  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:49.927878  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:49.927991  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:49.928131  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:21:49.928336  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:21:49.928350  158660 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:21:50.038338  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-094885
	
	I0731 20:21:50.038385  158660 main.go:141] libmachine: (multinode-094885) Calling .GetMachineName
	I0731 20:21:50.038742  158660 buildroot.go:166] provisioning hostname "multinode-094885"
	I0731 20:21:50.038772  158660 main.go:141] libmachine: (multinode-094885) Calling .GetMachineName
	I0731 20:21:50.038950  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.041667  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.042035  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.042056  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.042137  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:50.042315  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.042482  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.042599  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:50.042769  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:21:50.042939  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:21:50.042954  158660 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-094885 && echo "multinode-094885" | sudo tee /etc/hostname
	I0731 20:21:50.170804  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-094885
	
	I0731 20:21:50.170837  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.173957  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.174425  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.174455  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.174648  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:50.174864  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.175045  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.175241  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:50.175449  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:21:50.175645  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:21:50.175672  158660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-094885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-094885/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-094885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:21:50.282597  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
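
Provisioning above runs three commands over SSH against 192.168.39.193: `hostname`, the /etc/hostname rewrite, and the /etc/hosts patch. A self-contained sketch of executing one such remote command with golang.org/x/crypto/ssh; the address, user and key path are taken from the log, the rest is illustrative:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.193:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("SSH cmd output: %s", out) // expected: multinode-094885
    }
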
	I0731 20:21:50.282627  158660 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:21:50.282663  158660 buildroot.go:174] setting up certificates
	I0731 20:21:50.282674  158660 provision.go:84] configureAuth start
	I0731 20:21:50.282686  158660 main.go:141] libmachine: (multinode-094885) Calling .GetMachineName
	I0731 20:21:50.282924  158660 main.go:141] libmachine: (multinode-094885) Calling .GetIP
	I0731 20:21:50.285203  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.285631  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.285660  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.285826  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.288046  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.288341  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.288365  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.288469  158660 provision.go:143] copyHostCerts
	I0731 20:21:50.288500  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:21:50.288532  158660 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:21:50.288540  158660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:21:50.288608  158660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:21:50.288703  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:21:50.288717  158660 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:21:50.288721  158660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:21:50.288751  158660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:21:50.288814  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:21:50.288831  158660 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:21:50.288835  158660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:21:50.288858  158660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:21:50.288915  158660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.multinode-094885 san=[127.0.0.1 192.168.39.193 localhost minikube multinode-094885]
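
configureAuth above generates a server certificate whose SAN list covers 127.0.0.1, 192.168.39.193, localhost, minikube and multinode-094885. A simplified sketch of producing such a certificate with Go's crypto/x509; unlike the real step it self-signs instead of signing with the minikube CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-094885"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            DNSNames:     []string{"localhost", "minikube", "multinode-094885"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.193")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity: the template doubles as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
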
	I0731 20:21:50.396756  158660 provision.go:177] copyRemoteCerts
	I0731 20:21:50.396818  158660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:21:50.396843  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.399576  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.399905  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.399927  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.400166  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:50.400366  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.400668  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:50.400786  158660 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:21:50.484539  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 20:21:50.484637  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:21:50.510614  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 20:21:50.510719  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 20:21:50.536136  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 20:21:50.536204  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:21:50.561748  158660 provision.go:87] duration metric: took 279.058934ms to configureAuth
	I0731 20:21:50.561781  158660 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:21:50.562015  158660 config.go:182] Loaded profile config "multinode-094885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:21:50.562088  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:21:50.564877  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.565265  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:21:50.565290  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:21:50.565493  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:21:50.565716  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.565862  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:21:50.565985  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:21:50.566106  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:21:50.566373  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:21:50.566396  158660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:23:21.444243  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:23:21.444286  158660 machine.go:97] duration metric: took 1m31.519817576s to provisionDockerMachine
	I0731 20:23:21.444301  158660 start.go:293] postStartSetup for "multinode-094885" (driver="kvm2")
	I0731 20:23:21.444317  158660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:23:21.444337  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.444741  158660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:23:21.444780  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:23:21.448177  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.448611  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.448656  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.448939  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:23:21.449156  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.449325  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:23:21.449493  158660 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:23:21.537363  158660 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:23:21.541299  158660 command_runner.go:130] > NAME=Buildroot
	I0731 20:23:21.541316  158660 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 20:23:21.541321  158660 command_runner.go:130] > ID=buildroot
	I0731 20:23:21.541327  158660 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 20:23:21.541348  158660 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 20:23:21.541432  158660 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:23:21.541454  158660 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:23:21.541525  158660 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:23:21.541618  158660 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:23:21.541630  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /etc/ssl/certs/1288912.pem
	I0731 20:23:21.541737  158660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:23:21.551523  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:23:21.576695  158660 start.go:296] duration metric: took 132.364162ms for postStartSetup
	I0731 20:23:21.576751  158660 fix.go:56] duration metric: took 1m31.673715161s for fixHost
	I0731 20:23:21.576790  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:23:21.579534  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.579887  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.579916  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.580103  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:23:21.580325  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.580525  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.580651  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:23:21.580817  158660 main.go:141] libmachine: Using SSH client type: native
	I0731 20:23:21.581030  158660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I0731 20:23:21.581042  158660 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:23:21.686369  158660 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722457401.657112561
	
	I0731 20:23:21.686396  158660 fix.go:216] guest clock: 1722457401.657112561
	I0731 20:23:21.686408  158660 fix.go:229] Guest: 2024-07-31 20:23:21.657112561 +0000 UTC Remote: 2024-07-31 20:23:21.576756777 +0000 UTC m=+91.798841457 (delta=80.355784ms)
	I0731 20:23:21.686444  158660 fix.go:200] guest clock delta is within tolerance: 80.355784ms
	I0731 20:23:21.686454  158660 start.go:83] releasing machines lock for "multinode-094885", held for 1m31.783436589s
	I0731 20:23:21.686477  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.686759  158660 main.go:141] libmachine: (multinode-094885) Calling .GetIP
	I0731 20:23:21.689632  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.689969  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.689996  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.690156  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.690684  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.690868  158660 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:23:21.690928  158660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:23:21.690982  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:23:21.691108  158660 ssh_runner.go:195] Run: cat /version.json
	I0731 20:23:21.691131  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:23:21.693639  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.693708  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.694046  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.694072  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.694109  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:21.694125  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:21.694192  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:23:21.694382  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:23:21.694538  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.694611  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:23:21.694776  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:23:21.694798  158660 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:23:21.694949  158660 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:23:21.694978  158660 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:23:21.795496  158660 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 20:23:21.796250  158660 command_runner.go:130] > {"iso_version": "v1.33.1-1722420371-19355", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "7d72c3be84f92807e8ddb66796778c6727075dd6"}
	I0731 20:23:21.796413  158660 ssh_runner.go:195] Run: systemctl --version
	I0731 20:23:21.802371  158660 command_runner.go:130] > systemd 252 (252)
	I0731 20:23:21.802411  158660 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 20:23:21.802699  158660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:23:21.960519  158660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 20:23:21.967917  158660 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 20:23:21.967968  158660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:23:21.968052  158660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:23:21.977955  158660 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
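
The find invocation above would rename any bridge or podman CNI config to a .mk_disabled suffix; here nothing matched, so there was nothing to disable. The same step done natively in Go instead of via find (a sketch, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Println("rename failed:", err)
                }
            }
        }
        fmt.Println("bridge/podman CNI configs disabled (if any were present)")
    }
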
	I0731 20:23:21.977980  158660 start.go:495] detecting cgroup driver to use...
	I0731 20:23:21.978055  158660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:23:21.995788  158660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:23:22.009808  158660 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:23:22.009870  158660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:23:22.023908  158660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:23:22.037488  158660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:23:22.177446  158660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:23:22.315615  158660 docker.go:233] disabling docker service ...
	I0731 20:23:22.315718  158660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:23:22.332851  158660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:23:22.347146  158660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:23:22.481959  158660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:23:22.644466  158660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:23:22.686640  158660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:23:22.718678  158660 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 20:23:22.718730  158660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:23:22.718799  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.732839  158660 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:23:22.732909  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.746446  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.761062  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.771509  158660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:23:22.782157  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.797807  158660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:23:22.811808  158660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
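
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and open unprivileged port 0 via default_sysctls. A sketch of the first two rewrites as Go string surgery; the starting values in conf are assumed, not taken from the VM:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"

        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }
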
	I0731 20:23:22.826195  158660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:23:22.836590  158660 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 20:23:22.837108  158660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:23:22.853689  158660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:23:23.020547  158660 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:23:33.251282  158660 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.230692017s)
	I0731 20:23:33.251317  158660 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:23:33.251418  158660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:23:33.256512  158660 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 20:23:33.256541  158660 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 20:23:33.256551  158660 command_runner.go:130] > Device: 0,22	Inode: 1427        Links: 1
	I0731 20:23:33.256561  158660 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 20:23:33.256566  158660 command_runner.go:130] > Access: 2024-07-31 20:23:33.070275458 +0000
	I0731 20:23:33.256572  158660 command_runner.go:130] > Modify: 2024-07-31 20:23:33.070275458 +0000
	I0731 20:23:33.256577  158660 command_runner.go:130] > Change: 2024-07-31 20:23:33.070275458 +0000
	I0731 20:23:33.256581  158660 command_runner.go:130] >  Birth: -
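
After restarting CRI-O, the run waits up to 60s for /var/run/crio/crio.sock to exist before trusting crictl, as the stat output above confirms. A minimal polling sketch of that wait (the 500ms interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket is present
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
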
	I0731 20:23:33.256727  158660 start.go:563] Will wait 60s for crictl version
	I0731 20:23:33.256792  158660 ssh_runner.go:195] Run: which crictl
	I0731 20:23:33.260739  158660 command_runner.go:130] > /usr/bin/crictl
	I0731 20:23:33.260814  158660 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:23:33.303435  158660 command_runner.go:130] > Version:  0.1.0
	I0731 20:23:33.303460  158660 command_runner.go:130] > RuntimeName:  cri-o
	I0731 20:23:33.303465  158660 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 20:23:33.303470  158660 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 20:23:33.304737  158660 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:23:33.304809  158660 ssh_runner.go:195] Run: crio --version
	I0731 20:23:33.333865  158660 command_runner.go:130] > crio version 1.29.1
	I0731 20:23:33.333891  158660 command_runner.go:130] > Version:        1.29.1
	I0731 20:23:33.333900  158660 command_runner.go:130] > GitCommit:      unknown
	I0731 20:23:33.333905  158660 command_runner.go:130] > GitCommitDate:  unknown
	I0731 20:23:33.333912  158660 command_runner.go:130] > GitTreeState:   clean
	I0731 20:23:33.333920  158660 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0731 20:23:33.333927  158660 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 20:23:33.333931  158660 command_runner.go:130] > Compiler:       gc
	I0731 20:23:33.333937  158660 command_runner.go:130] > Platform:       linux/amd64
	I0731 20:23:33.333942  158660 command_runner.go:130] > Linkmode:       dynamic
	I0731 20:23:33.333948  158660 command_runner.go:130] > BuildTags:      
	I0731 20:23:33.333954  158660 command_runner.go:130] >   containers_image_ostree_stub
	I0731 20:23:33.333960  158660 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 20:23:33.333967  158660 command_runner.go:130] >   btrfs_noversion
	I0731 20:23:33.333975  158660 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 20:23:33.333985  158660 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 20:23:33.333992  158660 command_runner.go:130] >   seccomp
	I0731 20:23:33.334000  158660 command_runner.go:130] > LDFlags:          unknown
	I0731 20:23:33.334007  158660 command_runner.go:130] > SeccompEnabled:   true
	I0731 20:23:33.334034  158660 command_runner.go:130] > AppArmorEnabled:  false
	I0731 20:23:33.335129  158660 ssh_runner.go:195] Run: crio --version
	I0731 20:23:33.365542  158660 command_runner.go:130] > crio version 1.29.1
	I0731 20:23:33.365571  158660 command_runner.go:130] > Version:        1.29.1
	I0731 20:23:33.365579  158660 command_runner.go:130] > GitCommit:      unknown
	I0731 20:23:33.365586  158660 command_runner.go:130] > GitCommitDate:  unknown
	I0731 20:23:33.365591  158660 command_runner.go:130] > GitTreeState:   clean
	I0731 20:23:33.365598  158660 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0731 20:23:33.365604  158660 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 20:23:33.365610  158660 command_runner.go:130] > Compiler:       gc
	I0731 20:23:33.365617  158660 command_runner.go:130] > Platform:       linux/amd64
	I0731 20:23:33.365623  158660 command_runner.go:130] > Linkmode:       dynamic
	I0731 20:23:33.365630  158660 command_runner.go:130] > BuildTags:      
	I0731 20:23:33.365642  158660 command_runner.go:130] >   containers_image_ostree_stub
	I0731 20:23:33.365649  158660 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 20:23:33.365656  158660 command_runner.go:130] >   btrfs_noversion
	I0731 20:23:33.365661  158660 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 20:23:33.365665  158660 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 20:23:33.365669  158660 command_runner.go:130] >   seccomp
	I0731 20:23:33.365673  158660 command_runner.go:130] > LDFlags:          unknown
	I0731 20:23:33.365679  158660 command_runner.go:130] > SeccompEnabled:   true
	I0731 20:23:33.365683  158660 command_runner.go:130] > AppArmorEnabled:  false
	I0731 20:23:33.367890  158660 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:23:33.369393  158660 main.go:141] libmachine: (multinode-094885) Calling .GetIP
	I0731 20:23:33.372195  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:33.372591  158660 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:23:33.372610  158660 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:23:33.372884  158660 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:23:33.377361  158660 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 20:23:33.377556  158660 kubeadm.go:883] updating cluster {Name:multinode-094885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-094885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.53 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
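
With the cluster config reloaded, the image inventory is gathered by the `sudo crictl images --output json` call a few lines below. A sketch of decoding that payload; the struct fields follow the JSON keys visible in the log output, not any minikube type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type crictlImage struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"`
        Pinned      bool     `json:"pinned"`
    }

    type crictlImageList struct {
        Images []crictlImage `json:"images"`
    }

    func main() {
        // Abbreviated sample payload in the shape shown below in the log.
        raw := []byte(`{"images":[{"id":"example","repoTags":["registry.k8s.io/pause:3.9"],"repoDigests":[],"size":"750414","pinned":false}]}`)

        var list crictlImageList
        if err := json.Unmarshal(raw, &list); err != nil {
            panic(err)
        }
        fmt.Println(list.Images[0].RepoTags[0], list.Images[0].Size)
    }
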
	I0731 20:23:33.377733  158660 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:23:33.377798  158660 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:23:33.419994  158660 command_runner.go:130] > {
	I0731 20:23:33.420015  158660 command_runner.go:130] >   "images": [
	I0731 20:23:33.420021  158660 command_runner.go:130] >     {
	I0731 20:23:33.420042  158660 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 20:23:33.420049  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420057  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 20:23:33.420062  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420067  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420080  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 20:23:33.420095  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 20:23:33.420103  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420111  158660 command_runner.go:130] >       "size": "87165492",
	I0731 20:23:33.420118  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420125  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.420136  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420143  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420149  158660 command_runner.go:130] >     },
	I0731 20:23:33.420155  158660 command_runner.go:130] >     {
	I0731 20:23:33.420168  158660 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 20:23:33.420175  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420184  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 20:23:33.420191  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420200  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420212  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 20:23:33.420230  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 20:23:33.420238  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420245  158660 command_runner.go:130] >       "size": "87174707",
	I0731 20:23:33.420255  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420269  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.420278  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420286  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420294  158660 command_runner.go:130] >     },
	I0731 20:23:33.420301  158660 command_runner.go:130] >     {
	I0731 20:23:33.420315  158660 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 20:23:33.420324  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420333  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 20:23:33.420341  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420349  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420364  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 20:23:33.420386  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 20:23:33.420481  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420505  158660 command_runner.go:130] >       "size": "1363676",
	I0731 20:23:33.420511  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420519  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.420529  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420538  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420546  158660 command_runner.go:130] >     },
	I0731 20:23:33.420552  158660 command_runner.go:130] >     {
	I0731 20:23:33.420564  158660 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 20:23:33.420574  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420586  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 20:23:33.420594  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420602  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420619  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 20:23:33.420667  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 20:23:33.420676  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420684  158660 command_runner.go:130] >       "size": "31470524",
	I0731 20:23:33.420691  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420699  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.420708  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420716  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420725  158660 command_runner.go:130] >     },
	I0731 20:23:33.420732  158660 command_runner.go:130] >     {
	I0731 20:23:33.420745  158660 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 20:23:33.420754  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420763  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 20:23:33.420771  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420778  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420793  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 20:23:33.420809  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 20:23:33.420818  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420826  158660 command_runner.go:130] >       "size": "61245718",
	I0731 20:23:33.420836  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.420847  158660 command_runner.go:130] >       "username": "nonroot",
	I0731 20:23:33.420856  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.420870  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.420878  158660 command_runner.go:130] >     },
	I0731 20:23:33.420894  158660 command_runner.go:130] >     {
	I0731 20:23:33.420905  158660 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 20:23:33.420917  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.420928  158660 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 20:23:33.420938  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420945  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.420960  158660 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 20:23:33.420975  158660 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 20:23:33.420984  158660 command_runner.go:130] >       ],
	I0731 20:23:33.420993  158660 command_runner.go:130] >       "size": "150779692",
	I0731 20:23:33.421002  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421009  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.421017  158660 command_runner.go:130] >       },
	I0731 20:23:33.421024  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421033  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421041  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421048  158660 command_runner.go:130] >     },
	I0731 20:23:33.421056  158660 command_runner.go:130] >     {
	I0731 20:23:33.421068  158660 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 20:23:33.421077  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421088  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 20:23:33.421096  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421104  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421117  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 20:23:33.421132  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 20:23:33.421141  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421152  158660 command_runner.go:130] >       "size": "117609954",
	I0731 20:23:33.421160  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421168  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.421176  158660 command_runner.go:130] >       },
	I0731 20:23:33.421183  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421192  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421200  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421209  158660 command_runner.go:130] >     },
	I0731 20:23:33.421222  158660 command_runner.go:130] >     {
	I0731 20:23:33.421236  158660 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 20:23:33.421245  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421255  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 20:23:33.421263  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421270  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421299  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 20:23:33.421315  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 20:23:33.421323  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421332  158660 command_runner.go:130] >       "size": "112198984",
	I0731 20:23:33.421352  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421360  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.421365  158660 command_runner.go:130] >       },
	I0731 20:23:33.421370  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421376  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421383  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421389  158660 command_runner.go:130] >     },
	I0731 20:23:33.421395  158660 command_runner.go:130] >     {
	I0731 20:23:33.421404  158660 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 20:23:33.421412  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421420  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 20:23:33.421426  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421436  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421449  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 20:23:33.421463  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 20:23:33.421472  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421481  158660 command_runner.go:130] >       "size": "85953945",
	I0731 20:23:33.421490  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.421499  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421509  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421517  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421523  158660 command_runner.go:130] >     },
	I0731 20:23:33.421528  158660 command_runner.go:130] >     {
	I0731 20:23:33.421540  158660 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 20:23:33.421549  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421561  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 20:23:33.421571  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421580  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421595  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 20:23:33.421610  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 20:23:33.421618  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421626  158660 command_runner.go:130] >       "size": "63051080",
	I0731 20:23:33.421635  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421647  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.421655  158660 command_runner.go:130] >       },
	I0731 20:23:33.421663  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421672  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421681  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.421690  158660 command_runner.go:130] >     },
	I0731 20:23:33.421697  158660 command_runner.go:130] >     {
	I0731 20:23:33.421710  158660 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 20:23:33.421720  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.421731  158660 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 20:23:33.421740  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421748  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.421762  158660 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 20:23:33.421777  158660 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 20:23:33.421786  158660 command_runner.go:130] >       ],
	I0731 20:23:33.421794  158660 command_runner.go:130] >       "size": "750414",
	I0731 20:23:33.421802  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.421810  158660 command_runner.go:130] >         "value": "65535"
	I0731 20:23:33.421818  158660 command_runner.go:130] >       },
	I0731 20:23:33.421826  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.421834  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.421842  158660 command_runner.go:130] >       "pinned": true
	I0731 20:23:33.421850  158660 command_runner.go:130] >     }
	I0731 20:23:33.421856  158660 command_runner.go:130] >   ]
	I0731 20:23:33.421862  158660 command_runner.go:130] > }
	I0731 20:23:33.422054  158660 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:23:33.422067  158660 crio.go:433] Images already preloaded, skipping extraction
	I0731 20:23:33.422127  158660 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:23:33.457411  158660 command_runner.go:130] > {
	I0731 20:23:33.457437  158660 command_runner.go:130] >   "images": [
	I0731 20:23:33.457443  158660 command_runner.go:130] >     {
	I0731 20:23:33.457454  158660 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 20:23:33.457461  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.457469  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 20:23:33.457474  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457478  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.457489  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 20:23:33.457499  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 20:23:33.457505  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457512  158660 command_runner.go:130] >       "size": "87165492",
	I0731 20:23:33.457519  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.457526  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.457548  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.457558  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.457564  158660 command_runner.go:130] >     },
	I0731 20:23:33.457569  158660 command_runner.go:130] >     {
	I0731 20:23:33.457578  158660 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 20:23:33.457586  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.457596  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 20:23:33.457604  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457611  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.457623  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 20:23:33.457637  158660 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 20:23:33.457654  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457663  158660 command_runner.go:130] >       "size": "87174707",
	I0731 20:23:33.457670  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.457684  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.457693  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.457701  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.457709  158660 command_runner.go:130] >     },
	I0731 20:23:33.457716  158660 command_runner.go:130] >     {
	I0731 20:23:33.457730  158660 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 20:23:33.457740  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.457749  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 20:23:33.457758  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457765  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.457780  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 20:23:33.457796  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 20:23:33.457804  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457811  158660 command_runner.go:130] >       "size": "1363676",
	I0731 20:23:33.457820  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.457827  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.457836  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.457844  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.457853  158660 command_runner.go:130] >     },
	I0731 20:23:33.457861  158660 command_runner.go:130] >     {
	I0731 20:23:33.457872  158660 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 20:23:33.457881  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.457894  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 20:23:33.457903  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457910  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.457926  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 20:23:33.457946  158660 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 20:23:33.457954  158660 command_runner.go:130] >       ],
	I0731 20:23:33.457962  158660 command_runner.go:130] >       "size": "31470524",
	I0731 20:23:33.457971  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.457980  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.457987  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.457998  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458008  158660 command_runner.go:130] >     },
	I0731 20:23:33.458015  158660 command_runner.go:130] >     {
	I0731 20:23:33.458026  158660 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 20:23:33.458034  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458044  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 20:23:33.458052  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458060  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458075  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 20:23:33.458090  158660 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 20:23:33.458098  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458106  158660 command_runner.go:130] >       "size": "61245718",
	I0731 20:23:33.458116  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.458124  158660 command_runner.go:130] >       "username": "nonroot",
	I0731 20:23:33.458133  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458141  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458149  158660 command_runner.go:130] >     },
	I0731 20:23:33.458155  158660 command_runner.go:130] >     {
	I0731 20:23:33.458168  158660 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 20:23:33.458177  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458187  158660 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 20:23:33.458195  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458202  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458215  158660 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 20:23:33.458229  158660 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 20:23:33.458237  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458246  158660 command_runner.go:130] >       "size": "150779692",
	I0731 20:23:33.458256  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.458264  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.458271  158660 command_runner.go:130] >       },
	I0731 20:23:33.458281  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458290  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458298  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458304  158660 command_runner.go:130] >     },
	I0731 20:23:33.458314  158660 command_runner.go:130] >     {
	I0731 20:23:33.458328  158660 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 20:23:33.458338  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458347  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 20:23:33.458355  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458364  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458379  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 20:23:33.458394  158660 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 20:23:33.458403  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458410  158660 command_runner.go:130] >       "size": "117609954",
	I0731 20:23:33.458420  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.458429  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.458437  158660 command_runner.go:130] >       },
	I0731 20:23:33.458444  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458453  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458461  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458469  158660 command_runner.go:130] >     },
	I0731 20:23:33.458476  158660 command_runner.go:130] >     {
	I0731 20:23:33.458488  158660 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 20:23:33.458497  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458506  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 20:23:33.458514  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458522  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458546  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 20:23:33.458560  158660 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 20:23:33.458566  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458573  158660 command_runner.go:130] >       "size": "112198984",
	I0731 20:23:33.458584  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.458594  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.458600  158660 command_runner.go:130] >       },
	I0731 20:23:33.458616  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458627  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458633  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458639  158660 command_runner.go:130] >     },
	I0731 20:23:33.458648  158660 command_runner.go:130] >     {
	I0731 20:23:33.458655  158660 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 20:23:33.458662  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458667  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 20:23:33.458670  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458675  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458684  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 20:23:33.458698  158660 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 20:23:33.458708  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458715  158660 command_runner.go:130] >       "size": "85953945",
	I0731 20:23:33.458724  158660 command_runner.go:130] >       "uid": null,
	I0731 20:23:33.458733  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458742  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458752  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458761  158660 command_runner.go:130] >     },
	I0731 20:23:33.458768  158660 command_runner.go:130] >     {
	I0731 20:23:33.458775  158660 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 20:23:33.458781  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458786  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 20:23:33.458795  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458805  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458820  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 20:23:33.458834  158660 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 20:23:33.458842  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458852  158660 command_runner.go:130] >       "size": "63051080",
	I0731 20:23:33.458861  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.458868  158660 command_runner.go:130] >         "value": "0"
	I0731 20:23:33.458872  158660 command_runner.go:130] >       },
	I0731 20:23:33.458879  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.458889  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.458899  158660 command_runner.go:130] >       "pinned": false
	I0731 20:23:33.458908  158660 command_runner.go:130] >     },
	I0731 20:23:33.458916  158660 command_runner.go:130] >     {
	I0731 20:23:33.458926  158660 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 20:23:33.458936  158660 command_runner.go:130] >       "repoTags": [
	I0731 20:23:33.458946  158660 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 20:23:33.458952  158660 command_runner.go:130] >       ],
	I0731 20:23:33.458957  158660 command_runner.go:130] >       "repoDigests": [
	I0731 20:23:33.458970  158660 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 20:23:33.458984  158660 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 20:23:33.458993  158660 command_runner.go:130] >       ],
	I0731 20:23:33.459002  158660 command_runner.go:130] >       "size": "750414",
	I0731 20:23:33.459011  158660 command_runner.go:130] >       "uid": {
	I0731 20:23:33.459021  158660 command_runner.go:130] >         "value": "65535"
	I0731 20:23:33.459028  158660 command_runner.go:130] >       },
	I0731 20:23:33.459035  158660 command_runner.go:130] >       "username": "",
	I0731 20:23:33.459044  158660 command_runner.go:130] >       "spec": null,
	I0731 20:23:33.459051  158660 command_runner.go:130] >       "pinned": true
	I0731 20:23:33.459056  158660 command_runner.go:130] >     }
	I0731 20:23:33.459064  158660 command_runner.go:130] >   ]
	I0731 20:23:33.459072  158660 command_runner.go:130] > }
	I0731 20:23:33.459250  158660 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:23:33.459264  158660 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:23:33.459273  158660 kubeadm.go:934] updating node { 192.168.39.193 8443 v1.30.3 crio true true} ...
	I0731 20:23:33.459497  158660 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-094885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-094885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:23:33.459591  158660 ssh_runner.go:195] Run: crio config
	I0731 20:23:33.498850  158660 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 20:23:33.498882  158660 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 20:23:33.498892  158660 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 20:23:33.498896  158660 command_runner.go:130] > #
	I0731 20:23:33.498906  158660 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 20:23:33.498915  158660 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 20:23:33.498922  158660 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 20:23:33.498932  158660 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 20:23:33.498938  158660 command_runner.go:130] > # reload'.
	I0731 20:23:33.498950  158660 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 20:23:33.498977  158660 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 20:23:33.498990  158660 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 20:23:33.498999  158660 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 20:23:33.499006  158660 command_runner.go:130] > [crio]
	I0731 20:23:33.499016  158660 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 20:23:33.499024  158660 command_runner.go:130] > # container images, in this directory.
	I0731 20:23:33.499032  158660 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 20:23:33.499047  158660 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 20:23:33.499058  158660 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 20:23:33.499070  158660 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0731 20:23:33.499079  158660 command_runner.go:130] > # imagestore = ""
	I0731 20:23:33.499090  158660 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 20:23:33.499103  158660 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 20:23:33.499110  158660 command_runner.go:130] > storage_driver = "overlay"
	I0731 20:23:33.499121  158660 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 20:23:33.499133  158660 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 20:23:33.499139  158660 command_runner.go:130] > storage_option = [
	I0731 20:23:33.499148  158660 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 20:23:33.499155  158660 command_runner.go:130] > ]
	I0731 20:23:33.499166  158660 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 20:23:33.499180  158660 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 20:23:33.499190  158660 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 20:23:33.499203  158660 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 20:23:33.499216  158660 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 20:23:33.499226  158660 command_runner.go:130] > # always happen on a node reboot
	I0731 20:23:33.499238  158660 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 20:23:33.499254  158660 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 20:23:33.499267  158660 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 20:23:33.499278  158660 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 20:23:33.499290  158660 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 20:23:33.499306  158660 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 20:23:33.499322  158660 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 20:23:33.499336  158660 command_runner.go:130] > # internal_wipe = true
	I0731 20:23:33.499352  158660 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 20:23:33.499363  158660 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 20:23:33.499394  158660 command_runner.go:130] > # internal_repair = false
	I0731 20:23:33.499405  158660 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 20:23:33.499416  158660 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 20:23:33.499426  158660 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 20:23:33.499438  158660 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 20:23:33.499452  158660 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 20:23:33.499460  158660 command_runner.go:130] > [crio.api]
	I0731 20:23:33.499470  158660 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 20:23:33.499480  158660 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 20:23:33.499493  158660 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 20:23:33.499503  158660 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 20:23:33.499517  158660 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 20:23:33.499528  158660 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 20:23:33.499537  158660 command_runner.go:130] > # stream_port = "0"
	I0731 20:23:33.499548  158660 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 20:23:33.499558  158660 command_runner.go:130] > # stream_enable_tls = false
	I0731 20:23:33.499571  158660 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 20:23:33.499580  158660 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 20:23:33.499593  158660 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 20:23:33.499605  158660 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 20:23:33.499611  158660 command_runner.go:130] > # minutes.
	I0731 20:23:33.499620  158660 command_runner.go:130] > # stream_tls_cert = ""
	I0731 20:23:33.499630  158660 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 20:23:33.499641  158660 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 20:23:33.499650  158660 command_runner.go:130] > # stream_tls_key = ""
	I0731 20:23:33.499660  158660 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 20:23:33.499674  158660 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 20:23:33.499695  158660 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 20:23:33.499704  158660 command_runner.go:130] > # stream_tls_ca = ""
	I0731 20:23:33.499717  158660 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 20:23:33.499727  158660 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 20:23:33.499742  158660 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 20:23:33.499753  158660 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0731 20:23:33.499765  158660 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 20:23:33.499776  158660 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 20:23:33.499783  158660 command_runner.go:130] > [crio.runtime]
	I0731 20:23:33.499793  158660 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 20:23:33.499802  158660 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 20:23:33.499811  158660 command_runner.go:130] > # "nofile=1024:2048"
	I0731 20:23:33.499823  158660 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 20:23:33.499830  158660 command_runner.go:130] > # default_ulimits = [
	I0731 20:23:33.499834  158660 command_runner.go:130] > # ]
	I0731 20:23:33.499840  158660 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 20:23:33.499847  158660 command_runner.go:130] > # no_pivot = false
	I0731 20:23:33.499852  158660 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 20:23:33.499861  158660 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 20:23:33.499869  158660 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 20:23:33.499881  158660 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 20:23:33.499892  158660 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 20:23:33.499906  158660 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 20:23:33.499915  158660 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 20:23:33.499922  158660 command_runner.go:130] > # Cgroup setting for conmon
	I0731 20:23:33.499934  158660 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 20:23:33.499940  158660 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 20:23:33.499946  158660 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 20:23:33.499954  158660 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 20:23:33.499964  158660 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 20:23:33.499974  158660 command_runner.go:130] > conmon_env = [
	I0731 20:23:33.499984  158660 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 20:23:33.499993  158660 command_runner.go:130] > ]
	I0731 20:23:33.500002  158660 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 20:23:33.500013  158660 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 20:23:33.500024  158660 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 20:23:33.500034  158660 command_runner.go:130] > # default_env = [
	I0731 20:23:33.500039  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500046  158660 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 20:23:33.500057  158660 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0731 20:23:33.500066  158660 command_runner.go:130] > # selinux = false
	I0731 20:23:33.500076  158660 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 20:23:33.500090  158660 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 20:23:33.500102  158660 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 20:23:33.500111  158660 command_runner.go:130] > # seccomp_profile = ""
	I0731 20:23:33.500120  158660 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 20:23:33.500128  158660 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 20:23:33.500139  158660 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 20:23:33.500149  158660 command_runner.go:130] > # which might increase security.
	I0731 20:23:33.500156  158660 command_runner.go:130] > # This option is currently deprecated,
	I0731 20:23:33.500169  158660 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 20:23:33.500179  158660 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 20:23:33.500192  158660 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 20:23:33.500205  158660 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 20:23:33.500216  158660 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 20:23:33.500225  158660 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 20:23:33.500233  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.500246  158660 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 20:23:33.500256  158660 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 20:23:33.500266  158660 command_runner.go:130] > # the cgroup blockio controller.
	I0731 20:23:33.500275  158660 command_runner.go:130] > # blockio_config_file = ""
	I0731 20:23:33.500287  158660 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 20:23:33.500297  158660 command_runner.go:130] > # blockio parameters.
	I0731 20:23:33.500306  158660 command_runner.go:130] > # blockio_reload = false
	I0731 20:23:33.500316  158660 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 20:23:33.500323  158660 command_runner.go:130] > # irqbalance daemon.
	I0731 20:23:33.500335  158660 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 20:23:33.500347  158660 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 20:23:33.500358  158660 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 20:23:33.500373  158660 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 20:23:33.500390  158660 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 20:23:33.500401  158660 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 20:23:33.500412  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.500419  158660 command_runner.go:130] > # rdt_config_file = ""
	I0731 20:23:33.500426  158660 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 20:23:33.500436  158660 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 20:23:33.500456  158660 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 20:23:33.500466  158660 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 20:23:33.500476  158660 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 20:23:33.500488  158660 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 20:23:33.500497  158660 command_runner.go:130] > # will be added.
	I0731 20:23:33.500503  158660 command_runner.go:130] > # default_capabilities = [
	I0731 20:23:33.500510  158660 command_runner.go:130] > # 	"CHOWN",
	I0731 20:23:33.500516  158660 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 20:23:33.500525  158660 command_runner.go:130] > # 	"FSETID",
	I0731 20:23:33.500531  158660 command_runner.go:130] > # 	"FOWNER",
	I0731 20:23:33.500536  158660 command_runner.go:130] > # 	"SETGID",
	I0731 20:23:33.500542  158660 command_runner.go:130] > # 	"SETUID",
	I0731 20:23:33.500551  158660 command_runner.go:130] > # 	"SETPCAP",
	I0731 20:23:33.500558  158660 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 20:23:33.500566  158660 command_runner.go:130] > # 	"KILL",
	I0731 20:23:33.500572  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500587  158660 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 20:23:33.500599  158660 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 20:23:33.500607  158660 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 20:23:33.500615  158660 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 20:23:33.500627  158660 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 20:23:33.500636  158660 command_runner.go:130] > default_sysctls = [
	I0731 20:23:33.500644  158660 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 20:23:33.500652  158660 command_runner.go:130] > ]
	I0731 20:23:33.500660  158660 command_runner.go:130] > # List of devices on the host that a
	I0731 20:23:33.500672  158660 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 20:23:33.500681  158660 command_runner.go:130] > # allowed_devices = [
	I0731 20:23:33.500690  158660 command_runner.go:130] > # 	"/dev/fuse",
	I0731 20:23:33.500697  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500703  158660 command_runner.go:130] > # List of additional devices, specified as
	I0731 20:23:33.500716  158660 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 20:23:33.500727  158660 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 20:23:33.500738  158660 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 20:23:33.500747  158660 command_runner.go:130] > # additional_devices = [
	I0731 20:23:33.500752  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500763  158660 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 20:23:33.500772  158660 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 20:23:33.500778  158660 command_runner.go:130] > # 	"/etc/cdi",
	I0731 20:23:33.500785  158660 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 20:23:33.500788  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500800  158660 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 20:23:33.500814  158660 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 20:23:33.500824  158660 command_runner.go:130] > # Defaults to false.
	I0731 20:23:33.500832  158660 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 20:23:33.500844  158660 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 20:23:33.500857  158660 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 20:23:33.500865  158660 command_runner.go:130] > # hooks_dir = [
	I0731 20:23:33.500873  158660 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 20:23:33.500880  158660 command_runner.go:130] > # ]
	I0731 20:23:33.500890  158660 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 20:23:33.500903  158660 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 20:23:33.500914  158660 command_runner.go:130] > # its default mounts from the following two files:
	I0731 20:23:33.500922  158660 command_runner.go:130] > #
	I0731 20:23:33.500931  158660 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 20:23:33.500972  158660 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 20:23:33.500993  158660 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 20:23:33.501001  158660 command_runner.go:130] > #
	I0731 20:23:33.501015  158660 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 20:23:33.501028  158660 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 20:23:33.501041  158660 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 20:23:33.501049  158660 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 20:23:33.501055  158660 command_runner.go:130] > #
	I0731 20:23:33.501064  158660 command_runner.go:130] > # default_mounts_file = ""
	I0731 20:23:33.501072  158660 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 20:23:33.501086  158660 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 20:23:33.501096  158660 command_runner.go:130] > pids_limit = 1024
	I0731 20:23:33.501108  158660 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 20:23:33.501120  158660 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 20:23:33.501133  158660 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 20:23:33.501148  158660 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 20:23:33.501158  158660 command_runner.go:130] > # log_size_max = -1
	I0731 20:23:33.501169  158660 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 20:23:33.501179  158660 command_runner.go:130] > # log_to_journald = false
	I0731 20:23:33.501189  158660 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 20:23:33.501200  158660 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 20:23:33.501209  158660 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 20:23:33.501214  158660 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 20:23:33.501225  158660 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 20:23:33.501236  158660 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 20:23:33.501245  158660 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 20:23:33.501255  158660 command_runner.go:130] > # read_only = false
	I0731 20:23:33.501265  158660 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 20:23:33.501278  158660 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 20:23:33.501287  158660 command_runner.go:130] > # live configuration reload.
	I0731 20:23:33.501294  158660 command_runner.go:130] > # log_level = "info"
	I0731 20:23:33.501305  158660 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 20:23:33.501312  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.501321  158660 command_runner.go:130] > # log_filter = ""
	I0731 20:23:33.501330  158660 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 20:23:33.501352  158660 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 20:23:33.501359  158660 command_runner.go:130] > # separated by comma.
	I0731 20:23:33.501378  158660 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 20:23:33.501387  158660 command_runner.go:130] > # uid_mappings = ""
	I0731 20:23:33.501396  158660 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 20:23:33.501408  158660 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 20:23:33.501418  158660 command_runner.go:130] > # separated by comma.
	I0731 20:23:33.501431  158660 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 20:23:33.501439  158660 command_runner.go:130] > # gid_mappings = ""
	I0731 20:23:33.501449  158660 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 20:23:33.501461  158660 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 20:23:33.501470  158660 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 20:23:33.501485  158660 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 20:23:33.501495  158660 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 20:23:33.501505  158660 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 20:23:33.501517  158660 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 20:23:33.501530  158660 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 20:23:33.501545  158660 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 20:23:33.501554  158660 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 20:23:33.501563  158660 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 20:23:33.501576  158660 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 20:23:33.501588  158660 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 20:23:33.501596  158660 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 20:23:33.501606  158660 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 20:23:33.501619  158660 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 20:23:33.501630  158660 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 20:23:33.501640  158660 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 20:23:33.501647  158660 command_runner.go:130] > drop_infra_ctr = false
	I0731 20:23:33.501658  158660 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 20:23:33.501670  158660 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 20:23:33.501684  158660 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 20:23:33.501694  158660 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 20:23:33.501705  158660 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 20:23:33.501718  158660 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 20:23:33.501730  158660 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 20:23:33.501742  158660 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 20:23:33.501751  158660 command_runner.go:130] > # shared_cpuset = ""
	I0731 20:23:33.501761  158660 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 20:23:33.501773  158660 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 20:23:33.501783  158660 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 20:23:33.501794  158660 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 20:23:33.501801  158660 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 20:23:33.501810  158660 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 20:23:33.501823  158660 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 20:23:33.501830  158660 command_runner.go:130] > # enable_criu_support = false
	I0731 20:23:33.501842  158660 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 20:23:33.501855  158660 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 20:23:33.501865  158660 command_runner.go:130] > # enable_pod_events = false
	I0731 20:23:33.501892  158660 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 20:23:33.501908  158660 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 20:23:33.501927  158660 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 20:23:33.501934  158660 command_runner.go:130] > # default_runtime = "runc"
	I0731 20:23:33.501940  158660 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 20:23:33.501947  158660 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 20:23:33.501960  158660 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 20:23:33.501966  158660 command_runner.go:130] > # creation as a file is not desired either.
	I0731 20:23:33.501975  158660 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 20:23:33.501985  158660 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 20:23:33.501993  158660 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 20:23:33.502001  158660 command_runner.go:130] > # ]
	I0731 20:23:33.502013  158660 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 20:23:33.502026  158660 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 20:23:33.502036  158660 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 20:23:33.502047  158660 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 20:23:33.502056  158660 command_runner.go:130] > #
	I0731 20:23:33.502063  158660 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 20:23:33.502073  158660 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 20:23:33.502119  158660 command_runner.go:130] > # runtime_type = "oci"
	I0731 20:23:33.502132  158660 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 20:23:33.502140  158660 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 20:23:33.502150  158660 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 20:23:33.502158  158660 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 20:23:33.502171  158660 command_runner.go:130] > # monitor_env = []
	I0731 20:23:33.502183  158660 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 20:23:33.502192  158660 command_runner.go:130] > # allowed_annotations = []
	I0731 20:23:33.502201  158660 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 20:23:33.502210  158660 command_runner.go:130] > # Where:
	I0731 20:23:33.502218  158660 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 20:23:33.502230  158660 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 20:23:33.502243  158660 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 20:23:33.502255  158660 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 20:23:33.502264  158660 command_runner.go:130] > #   in $PATH.
	I0731 20:23:33.502276  158660 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 20:23:33.502286  158660 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 20:23:33.502298  158660 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 20:23:33.502305  158660 command_runner.go:130] > #   state.
	I0731 20:23:33.502313  158660 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 20:23:33.502324  158660 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 20:23:33.502334  158660 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 20:23:33.502347  158660 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 20:23:33.502359  158660 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 20:23:33.502372  158660 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 20:23:33.502386  158660 command_runner.go:130] > #   The currently recognized values are:
	I0731 20:23:33.502396  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 20:23:33.502410  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 20:23:33.502421  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 20:23:33.502435  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 20:23:33.502445  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 20:23:33.502457  158660 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 20:23:33.502472  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 20:23:33.502486  158660 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 20:23:33.502500  158660 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 20:23:33.502513  158660 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 20:23:33.502524  158660 command_runner.go:130] > #   deprecated option "conmon".
	I0731 20:23:33.502537  158660 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 20:23:33.502547  158660 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 20:23:33.502561  158660 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 20:23:33.502575  158660 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 20:23:33.502591  158660 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0731 20:23:33.502602  158660 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 20:23:33.502613  158660 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 20:23:33.502625  158660 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 20:23:33.502630  158660 command_runner.go:130] > #
	I0731 20:23:33.502638  158660 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 20:23:33.502646  158660 command_runner.go:130] > #
	I0731 20:23:33.502657  158660 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 20:23:33.502671  158660 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I0731 20:23:33.502678  158660 command_runner.go:130] > #
	I0731 20:23:33.502688  158660 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 20:23:33.502701  158660 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 20:23:33.502709  158660 command_runner.go:130] > #
	I0731 20:23:33.502720  158660 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 20:23:33.502728  158660 command_runner.go:130] > # feature.
	I0731 20:23:33.502734  158660 command_runner.go:130] > #
	I0731 20:23:33.502747  158660 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 20:23:33.502760  158660 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 20:23:33.502774  158660 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 20:23:33.502786  158660 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 20:23:33.502798  158660 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 20:23:33.502807  158660 command_runner.go:130] > #
	I0731 20:23:33.502817  158660 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 20:23:33.502830  158660 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 20:23:33.502840  158660 command_runner.go:130] > #
	I0731 20:23:33.502853  158660 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0731 20:23:33.502866  158660 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 20:23:33.502874  158660 command_runner.go:130] > #
	I0731 20:23:33.502884  158660 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 20:23:33.502897  158660 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 20:23:33.502906  158660 command_runner.go:130] > # limitation.
	I0731 20:23:33.502913  158660 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 20:23:33.502924  158660 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 20:23:33.502932  158660 command_runner.go:130] > runtime_type = "oci"
	I0731 20:23:33.502940  158660 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 20:23:33.502948  158660 command_runner.go:130] > runtime_config_path = ""
	I0731 20:23:33.502959  158660 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 20:23:33.502966  158660 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 20:23:33.502975  158660 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 20:23:33.502982  158660 command_runner.go:130] > monitor_env = [
	I0731 20:23:33.502995  158660 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 20:23:33.503003  158660 command_runner.go:130] > ]
	I0731 20:23:33.503011  158660 command_runner.go:130] > privileged_without_host_devices = false
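For reference, the runtime-handler table format documented above can be exercised without touching the main config by dropping a file into /etc/crio/crio.conf.d. A minimal sketch, assuming crun were installed at /usr/bin/crun (the handler name, paths and drop-in file name are illustrative, not values from this run); note that listing "io.kubernetes.cri-o.seccompNotifierAction" under allowed_annotations is the prerequisite the seccomp-notifier notes above refer to:

	# hypothetical drop-in declaring an extra runtime handler; adjust paths for your host
	sudo tee /etc/crio/crio.conf.d/10-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio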
	I0731 20:23:33.503025  158660 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 20:23:33.503036  158660 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 20:23:33.503050  158660 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 20:23:33.503065  158660 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0731 20:23:33.503081  158660 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 20:23:33.503094  158660 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 20:23:33.503116  158660 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 20:23:33.503132  158660 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 20:23:33.503144  158660 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 20:23:33.503157  158660 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 20:23:33.503161  158660 command_runner.go:130] > # Example:
	I0731 20:23:33.503167  158660 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 20:23:33.503178  158660 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 20:23:33.503187  158660 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 20:23:33.503194  158660 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 20:23:33.503200  158660 command_runner.go:130] > # cpuset = 0
	I0731 20:23:33.503207  158660 command_runner.go:130] > # cpushares = "0-1"
	I0731 20:23:33.503214  158660 command_runner.go:130] > # Where:
	I0731 20:23:33.503222  158660 command_runner.go:130] > # The workload name is workload-type.
	I0731 20:23:33.503232  158660 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 20:23:33.503240  158660 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 20:23:33.503249  158660 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 20:23:33.503261  158660 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 20:23:33.503271  158660 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
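Tying that together, a hedged sketch of a pod opting into the example workload above (the workloads table is marked EXPERIMENTAL, and the pod name, container name and the "512" value are invented for illustration; the per-container annotation mirrors the concrete example given just above):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    # activation_annotation: key-only, value ignored; opts the pod into the workload
	    io.crio/workload: ""
	    # per-container override, following the example above
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF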
	I0731 20:23:33.503279  158660 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 20:23:33.503290  158660 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 20:23:33.503297  158660 command_runner.go:130] > # Default value is set to true
	I0731 20:23:33.503305  158660 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 20:23:33.503317  158660 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 20:23:33.503328  158660 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 20:23:33.503339  158660 command_runner.go:130] > # Default value is set to 'false'
	I0731 20:23:33.503349  158660 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 20:23:33.503363  158660 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 20:23:33.503372  158660 command_runner.go:130] > #
	I0731 20:23:33.503392  158660 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 20:23:33.503407  158660 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 20:23:33.503421  158660 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 20:23:33.503434  158660 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 20:23:33.503446  158660 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 20:23:33.503456  158660 command_runner.go:130] > [crio.image]
	I0731 20:23:33.503466  158660 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 20:23:33.503476  158660 command_runner.go:130] > # default_transport = "docker://"
	I0731 20:23:33.503489  158660 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 20:23:33.503502  158660 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 20:23:33.503510  158660 command_runner.go:130] > # global_auth_file = ""
	I0731 20:23:33.503518  158660 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 20:23:33.503524  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.503530  158660 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 20:23:33.503536  158660 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 20:23:33.503544  158660 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 20:23:33.503549  158660 command_runner.go:130] > # This option supports live configuration reload.
	I0731 20:23:33.503556  158660 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 20:23:33.503561  158660 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 20:23:33.503570  158660 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0731 20:23:33.503577  158660 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0731 20:23:33.503585  158660 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 20:23:33.503589  158660 command_runner.go:130] > # pause_command = "/pause"
	I0731 20:23:33.503596  158660 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 20:23:33.503602  158660 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 20:23:33.503608  158660 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 20:23:33.503613  158660 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 20:23:33.503621  158660 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 20:23:33.503628  158660 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 20:23:33.503634  158660 command_runner.go:130] > # pinned_images = [
	I0731 20:23:33.503637  158660 command_runner.go:130] > # ]
	I0731 20:23:33.503643  158660 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 20:23:33.503651  158660 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 20:23:33.503657  158660 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 20:23:33.503664  158660 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 20:23:33.503673  158660 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 20:23:33.503682  158660 command_runner.go:130] > # signature_policy = ""
	I0731 20:23:33.503691  158660 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 20:23:33.503701  158660 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 20:23:33.503707  158660 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 20:23:33.503715  158660 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0731 20:23:33.503721  158660 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 20:23:33.503728  158660 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 20:23:33.503733  158660 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 20:23:33.503741  158660 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 20:23:33.503747  158660 command_runner.go:130] > # changing them here.
	I0731 20:23:33.503751  158660 command_runner.go:130] > # insecure_registries = [
	I0731 20:23:33.503755  158660 command_runner.go:130] > # ]
	I0731 20:23:33.503762  158660 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 20:23:33.503770  158660 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 20:23:33.503774  158660 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 20:23:33.503780  158660 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 20:23:33.503785  158660 command_runner.go:130] > # big_files_temporary_dir = ""
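As a small illustration of the [crio.image] knobs above, a drop-in could pin the pause image (plus a trailing-* glob of related images) and mark a private registry as insecure; the registry address and file name are assumptions, and registries are generally better configured in /etc/containers/registries.conf as noted above:

	sudo tee /etc/crio/crio.conf.d/20-images.conf <<-'EOF'
	[crio.image]
	# exact match plus a trailing-* glob, per the pinned_images rules above
	pinned_images = ["registry.k8s.io/pause:3.9", "registry.k8s.io/kube-*"]
	# skip TLS verification only for this hypothetical local registry
	insecure_registries = ["192.168.39.1:5000"]
	EOF
	sudo systemctl restart crio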
	I0731 20:23:33.503792  158660 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 20:23:33.503798  158660 command_runner.go:130] > # CNI plugins.
	I0731 20:23:33.503802  158660 command_runner.go:130] > [crio.network]
	I0731 20:23:33.503810  158660 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 20:23:33.503815  158660 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0731 20:23:33.503821  158660 command_runner.go:130] > # cni_default_network = ""
	I0731 20:23:33.503826  158660 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 20:23:33.503832  158660 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 20:23:33.503838  158660 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 20:23:33.503843  158660 command_runner.go:130] > # plugin_dirs = [
	I0731 20:23:33.503847  158660 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 20:23:33.503850  158660 command_runner.go:130] > # ]
	I0731 20:23:33.503857  158660 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 20:23:33.503861  158660 command_runner.go:130] > [crio.metrics]
	I0731 20:23:33.503865  158660 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 20:23:33.503872  158660 command_runner.go:130] > enable_metrics = true
	I0731 20:23:33.503876  158660 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 20:23:33.503883  158660 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 20:23:33.503888  158660 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0731 20:23:33.503896  158660 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 20:23:33.503902  158660 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 20:23:33.503906  158660 command_runner.go:130] > # metrics_collectors = [
	I0731 20:23:33.503910  158660 command_runner.go:130] > # 	"operations",
	I0731 20:23:33.503916  158660 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 20:23:33.503923  158660 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 20:23:33.503929  158660 command_runner.go:130] > # 	"operations_errors",
	I0731 20:23:33.503933  158660 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 20:23:33.503937  158660 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 20:23:33.503942  158660 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 20:23:33.503946  158660 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 20:23:33.503950  158660 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 20:23:33.503957  158660 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 20:23:33.503960  158660 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 20:23:33.503967  158660 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 20:23:33.503971  158660 command_runner.go:130] > # 	"containers_oom_total",
	I0731 20:23:33.503975  158660 command_runner.go:130] > # 	"containers_oom",
	I0731 20:23:33.503979  158660 command_runner.go:130] > # 	"processes_defunct",
	I0731 20:23:33.503985  158660 command_runner.go:130] > # 	"operations_total",
	I0731 20:23:33.503990  158660 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 20:23:33.503997  158660 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 20:23:33.504001  158660 command_runner.go:130] > # 	"operations_errors_total",
	I0731 20:23:33.504007  158660 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 20:23:33.504012  158660 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 20:23:33.504018  158660 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 20:23:33.504022  158660 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 20:23:33.504028  158660 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 20:23:33.504032  158660 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 20:23:33.504039  158660 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 20:23:33.504043  158660 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 20:23:33.504048  158660 command_runner.go:130] > # ]
	I0731 20:23:33.504053  158660 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 20:23:33.504059  158660 command_runner.go:130] > # metrics_port = 9090
	I0731 20:23:33.504065  158660 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 20:23:33.504071  158660 command_runner.go:130] > # metrics_socket = ""
	I0731 20:23:33.504076  158660 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 20:23:33.504084  158660 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 20:23:33.504091  158660 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 20:23:33.504097  158660 command_runner.go:130] > # certificate on any modification event.
	I0731 20:23:33.504102  158660 command_runner.go:130] > # metrics_cert = ""
	I0731 20:23:33.504108  158660 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 20:23:33.504113  158660 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 20:23:33.504119  158660 command_runner.go:130] > # metrics_key = ""
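Since enable_metrics = true above, the collectors can be sanity-checked over HTTP. A quick sketch, assuming the documented default metrics_port of 9090 applies (the port is commented out above, so the default would be in effect):

	# list a few of the operation counters exposed by the CRI-O metrics endpoint
	curl -s http://127.0.0.1:9090/metrics | grep -E '^(crio|container_runtime_crio)_operations' | head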
	I0731 20:23:33.504124  158660 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 20:23:33.504130  158660 command_runner.go:130] > [crio.tracing]
	I0731 20:23:33.504135  158660 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 20:23:33.504141  158660 command_runner.go:130] > # enable_tracing = false
	I0731 20:23:33.504146  158660 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0731 20:23:33.504153  158660 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 20:23:33.504159  158660 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 20:23:33.504166  158660 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 20:23:33.504170  158660 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 20:23:33.504173  158660 command_runner.go:130] > [crio.nri]
	I0731 20:23:33.504178  158660 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 20:23:33.504183  158660 command_runner.go:130] > # enable_nri = false
	I0731 20:23:33.504189  158660 command_runner.go:130] > # NRI socket to listen on.
	I0731 20:23:33.504195  158660 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 20:23:33.504199  158660 command_runner.go:130] > # NRI plugin directory to use.
	I0731 20:23:33.504206  158660 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 20:23:33.504210  158660 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 20:23:33.504220  158660 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 20:23:33.504228  158660 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 20:23:33.504232  158660 command_runner.go:130] > # nri_disable_connections = false
	I0731 20:23:33.504239  158660 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 20:23:33.504244  158660 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 20:23:33.504251  158660 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 20:23:33.504255  158660 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 20:23:33.504261  158660 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 20:23:33.504265  158660 command_runner.go:130] > [crio.stats]
	I0731 20:23:33.504272  158660 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 20:23:33.504279  158660 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 20:23:33.504283  158660 command_runner.go:130] > # stats_collection_period = 0
	I0731 20:23:33.504926  158660 command_runner.go:130] ! time="2024-07-31 20:23:33.461945345Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 20:23:33.504961  158660 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 20:23:33.505192  158660 cni.go:84] Creating CNI manager for ""
	I0731 20:23:33.505208  158660 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 20:23:33.505219  158660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:23:33.505246  158660 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-094885 NodeName:multinode-094885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:23:33.505435  158660 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-094885"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
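The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged on the node a few lines below as /var/tmp/minikube/kubeadm.yaml.new. Outside of minikube's own flow, a file like this is typically checked and consumed with kubeadm directly; a hedged sketch, assuming the staged copy ends up at /var/tmp/minikube/kubeadm.yaml:

	# validate all four documents against the v1.30.3 schemas
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# on a fresh control plane the same file would drive the bootstrap (minikube's exact invocation is not shown here)
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml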
	
	I0731 20:23:33.505516  158660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:23:33.515784  158660 command_runner.go:130] > kubeadm
	I0731 20:23:33.515809  158660 command_runner.go:130] > kubectl
	I0731 20:23:33.515815  158660 command_runner.go:130] > kubelet
	I0731 20:23:33.515841  158660 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:23:33.515887  158660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:23:33.525939  158660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 20:23:33.542866  158660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:23:33.560206  158660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0731 20:23:33.577802  158660 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I0731 20:23:33.581933  158660 command_runner.go:130] > 192.168.39.193	control-plane.minikube.internal
	I0731 20:23:33.582028  158660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:23:33.721738  158660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:23:33.737326  158660 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885 for IP: 192.168.39.193
	I0731 20:23:33.737359  158660 certs.go:194] generating shared ca certs ...
	I0731 20:23:33.737380  158660 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:23:33.737557  158660 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:23:33.737598  158660 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:23:33.737608  158660 certs.go:256] generating profile certs ...
	I0731 20:23:33.737700  158660 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/client.key
	I0731 20:23:33.737743  158660 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.key.3eab5c8e
	I0731 20:23:33.737782  158660 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.key
	I0731 20:23:33.737806  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 20:23:33.737820  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 20:23:33.737831  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 20:23:33.737841  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 20:23:33.737850  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 20:23:33.737863  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 20:23:33.737873  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 20:23:33.737885  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 20:23:33.737935  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:23:33.737961  158660 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:23:33.737971  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:23:33.737990  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:23:33.738015  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:23:33.738036  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:23:33.738071  158660 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:23:33.738096  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:33.738109  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem -> /usr/share/ca-certificates/128891.pem
	I0731 20:23:33.738121  158660 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> /usr/share/ca-certificates/1288912.pem
	I0731 20:23:33.738662  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:23:33.763788  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:23:33.788883  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:23:33.813822  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:23:33.838589  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:23:33.863142  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:23:33.887890  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:23:33.912118  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/multinode-094885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:23:33.936282  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:23:33.960194  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:23:33.984346  158660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:23:34.008173  158660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:23:34.025242  158660 ssh_runner.go:195] Run: openssl version
	I0731 20:23:34.031227  158660 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 20:23:34.031360  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:23:34.042500  158660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:34.047217  158660 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:34.047258  158660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:34.047304  158660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:23:34.053012  158660 command_runner.go:130] > b5213941
	I0731 20:23:34.053180  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:23:34.062920  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:23:34.073955  158660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:23:34.078709  158660 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:23:34.078732  158660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:23:34.078779  158660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:23:34.084688  158660 command_runner.go:130] > 51391683
	I0731 20:23:34.084755  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:23:34.094387  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:23:34.105505  158660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:23:34.110522  158660 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:23:34.110598  158660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:23:34.110649  158660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:23:34.116274  158660 command_runner.go:130] > 3ec20f2e
	I0731 20:23:34.116585  158660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
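The hash-and-symlink steps above follow the standard OpenSSL CA directory convention: /etc/ssl/certs is searched by subject hash, so each CA gets a <hash>.0 symlink (b5213941, 51391683 and 3ec20f2e above). Condensed into a sketch for the minikubeCA certificate shown earlier:

	# subject hash used for CA lookups in a -CApath directory
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# sanity check: the self-signed CA should now verify against the directory
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem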
	I0731 20:23:34.126536  158660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:23:34.131200  158660 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:23:34.131225  158660 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 20:23:34.131233  158660 command_runner.go:130] > Device: 253,1	Inode: 533291      Links: 1
	I0731 20:23:34.131243  158660 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 20:23:34.131251  158660 command_runner.go:130] > Access: 2024-07-31 20:16:25.458336209 +0000
	I0731 20:23:34.131258  158660 command_runner.go:130] > Modify: 2024-07-31 20:16:25.458336209 +0000
	I0731 20:23:34.131266  158660 command_runner.go:130] > Change: 2024-07-31 20:16:25.458336209 +0000
	I0731 20:23:34.131273  158660 command_runner.go:130] >  Birth: 2024-07-31 20:16:25.458336209 +0000
	I0731 20:23:34.131382  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:23:34.137061  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.137307  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:23:34.143170  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.143235  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:23:34.148758  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.148825  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:23:34.154670  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.154733  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:23:34.160397  158660 command_runner.go:130] > Certificate will not expire
	I0731 20:23:34.160694  158660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:23:34.166284  158660 command_runner.go:130] > Certificate will not expire
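Each -checkend 86400 probe above asks whether the certificate remains valid for at least another 86400 seconds (24 hours). The same check over the control-plane certs, condensed into one loop using the paths from the runs above:

	for crt in /var/lib/minikube/certs/{apiserver-etcd-client,apiserver-kubelet-client,front-proxy-client}.crt \
	           /var/lib/minikube/certs/etcd/{server,healthcheck-client,peer}.crt; do
	  openssl x509 -noout -checkend 86400 -in "$crt" || echo "expires within 24h: $crt"
	done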
	I0731 20:23:34.166591  158660 kubeadm.go:392] StartCluster: {Name:multinode-094885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-094885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.53 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:23:34.166738  158660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:23:34.166795  158660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:23:34.202327  158660 command_runner.go:130] > 7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb
	I0731 20:23:34.202363  158660 command_runner.go:130] > aebb97e9d0b5666e5da4442730c50929905272ee9c25c006a4c9e5eda35ef98b
	I0731 20:23:34.202373  158660 command_runner.go:130] > 72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230
	I0731 20:23:34.202384  158660 command_runner.go:130] > 86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a
	I0731 20:23:34.202581  158660 command_runner.go:130] > 4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761
	I0731 20:23:34.202624  158660 command_runner.go:130] > 3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9
	I0731 20:23:34.202636  158660 command_runner.go:130] > bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d
	I0731 20:23:34.202647  158660 command_runner.go:130] > 25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb
	I0731 20:23:34.202722  158660 command_runner.go:130] > 1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521
	I0731 20:23:34.204304  158660 cri.go:89] found id: "7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb"
	I0731 20:23:34.204321  158660 cri.go:89] found id: "aebb97e9d0b5666e5da4442730c50929905272ee9c25c006a4c9e5eda35ef98b"
	I0731 20:23:34.204337  158660 cri.go:89] found id: "72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230"
	I0731 20:23:34.204341  158660 cri.go:89] found id: "86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a"
	I0731 20:23:34.204345  158660 cri.go:89] found id: "4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761"
	I0731 20:23:34.204350  158660 cri.go:89] found id: "3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9"
	I0731 20:23:34.204354  158660 cri.go:89] found id: "bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d"
	I0731 20:23:34.204358  158660 cri.go:89] found id: "25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb"
	I0731 20:23:34.204362  158660 cri.go:89] found id: "1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521"
	I0731 20:23:34.204369  158660 cri.go:89] found id: ""
	I0731 20:23:34.204426  158660 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.888699564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457661888676660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfaf5364-72be-4c4c-aed2-cc2fc5860b49 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.889332620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9f7eab3-0d63-4ef9-a32a-894d673fd93f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.889383231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9f7eab3-0d63-4ef9-a32a-894d673fd93f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.889774722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15436fdd715785b635301ef11a649bb91a95d21320efe544557f270eace6df3f,PodSandboxId:1906f71f375b03f83f43c1528754732e1bfab0e9bbbf79523cacefa9ae27f715,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457455146360216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3,PodSandboxId:8a835dbc9646b109d083034ec8434f3d59a55bb87e020ad62206eeaa03be1fca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722457421633891789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1,PodSandboxId:ee745ccf6b1f15ef5701ee62a3ed93b59dcb2e4d5aeb23a16c40b9ea0cfe93a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457421587768075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97d9921b6b5b5e2e94ac6d8334ef0a99462ed8bb280be53483e354fc701ed19,PodSandboxId:9f1923274a45e66fdb7dba3d2b0ea4762ccdd3142a59f5b923bc4b6eee444280,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457421465473809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},An
notations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5,PodSandboxId:7775de6c551cfa697c443a1c10393f4987772634e09d1fe63430f301d84e5fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722457421417028604,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3,PodSandboxId:225fc91b7219deee45bab76b3fb7e7adf461f40fdaa6d410cda4871f1c90fc75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457416548041781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119,PodSandboxId:76048d11be0b39034995b7a3f5beb46372177f5072ed6a70450fdac83707a0da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457416478564127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85,PodSandboxId:a680464821f8e0e9fede1b3977123f951d9fb3c5ce53dec2842e5ce3272799ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457416512645124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83,PodSandboxId:eaf65f3cefe47f366ebebdf1c1b7fd10b0193a8b4de60003ad3f5ed12ec7fcb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457416455729519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb,PodSandboxId:cb7c00fd54be8529819c7f6fbe71d0abd6baf362ef761c93c3fb16f926ae1a33,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457402800817088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d04b8842076b61a34fd5b5ecfe3702a29f477a4e4f35542098783b88c33a82ca,PodSandboxId:2659ab64605639979224b37c4547b2024bc05913f45a9a7bb405ec83131ae9af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457082072782326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230,PodSandboxId:462cb7067877aab3a2ecfea2172d63ecd9b051871faa6d504453347a15e22619,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722457024342454654,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},Annotations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a,PodSandboxId:d53634fcc825315ef3f58ad427820c1422931f1314b749c2f36bf1d2a5d16d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457012348726824,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761,PodSandboxId:bee3609a504470e74917d47a74616ca3798ef90df0b6d23171ae00239775d808,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457008537165511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9,PodSandboxId:ddd0df2e6857e8fb0ac2f5fb7b3deb0327e935ebf77ea225542a00732dc05300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722456989271588300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c
17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb,PodSandboxId:3027554ccce1d345e3b6c8beb43cbee5573a4675e09e728533c0e6c178a996f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722456989234785043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d,PodSandboxId:e2ffd4f3dae22fb8ff47764ca6c9f49bad4926aa0fee04c879f049bd513c68e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722456989237785971,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521,PodSandboxId:7d22a0fb42285db96ac26c365a427ad06da26d5291b1544cece1e6dc093ab549,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722456989177786195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9f7eab3-0d63-4ef9-a32a-894d673fd93f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.937601524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dda6876a-8eee-4f5f-a752-ae6f473f4e39 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.937696183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dda6876a-8eee-4f5f-a752-ae6f473f4e39 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.939707798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea1c1217-ce08-4984-8aef-96978a5b6a75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.940205595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457661940179276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea1c1217-ce08-4984-8aef-96978a5b6a75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.941096439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3362304e-b35a-4319-945c-9f583aeeb9f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.941214932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3362304e-b35a-4319-945c-9f583aeeb9f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.941957090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15436fdd715785b635301ef11a649bb91a95d21320efe544557f270eace6df3f,PodSandboxId:1906f71f375b03f83f43c1528754732e1bfab0e9bbbf79523cacefa9ae27f715,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457455146360216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3,PodSandboxId:8a835dbc9646b109d083034ec8434f3d59a55bb87e020ad62206eeaa03be1fca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722457421633891789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1,PodSandboxId:ee745ccf6b1f15ef5701ee62a3ed93b59dcb2e4d5aeb23a16c40b9ea0cfe93a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457421587768075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97d9921b6b5b5e2e94ac6d8334ef0a99462ed8bb280be53483e354fc701ed19,PodSandboxId:9f1923274a45e66fdb7dba3d2b0ea4762ccdd3142a59f5b923bc4b6eee444280,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457421465473809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},An
notations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5,PodSandboxId:7775de6c551cfa697c443a1c10393f4987772634e09d1fe63430f301d84e5fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722457421417028604,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3,PodSandboxId:225fc91b7219deee45bab76b3fb7e7adf461f40fdaa6d410cda4871f1c90fc75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457416548041781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119,PodSandboxId:76048d11be0b39034995b7a3f5beb46372177f5072ed6a70450fdac83707a0da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457416478564127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85,PodSandboxId:a680464821f8e0e9fede1b3977123f951d9fb3c5ce53dec2842e5ce3272799ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457416512645124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83,PodSandboxId:eaf65f3cefe47f366ebebdf1c1b7fd10b0193a8b4de60003ad3f5ed12ec7fcb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457416455729519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb,PodSandboxId:cb7c00fd54be8529819c7f6fbe71d0abd6baf362ef761c93c3fb16f926ae1a33,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457402800817088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d04b8842076b61a34fd5b5ecfe3702a29f477a4e4f35542098783b88c33a82ca,PodSandboxId:2659ab64605639979224b37c4547b2024bc05913f45a9a7bb405ec83131ae9af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457082072782326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230,PodSandboxId:462cb7067877aab3a2ecfea2172d63ecd9b051871faa6d504453347a15e22619,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722457024342454654,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},Annotations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a,PodSandboxId:d53634fcc825315ef3f58ad427820c1422931f1314b749c2f36bf1d2a5d16d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457012348726824,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761,PodSandboxId:bee3609a504470e74917d47a74616ca3798ef90df0b6d23171ae00239775d808,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457008537165511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9,PodSandboxId:ddd0df2e6857e8fb0ac2f5fb7b3deb0327e935ebf77ea225542a00732dc05300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722456989271588300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c
17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb,PodSandboxId:3027554ccce1d345e3b6c8beb43cbee5573a4675e09e728533c0e6c178a996f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722456989234785043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d,PodSandboxId:e2ffd4f3dae22fb8ff47764ca6c9f49bad4926aa0fee04c879f049bd513c68e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722456989237785971,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521,PodSandboxId:7d22a0fb42285db96ac26c365a427ad06da26d5291b1544cece1e6dc093ab549,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722456989177786195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3362304e-b35a-4319-945c-9f583aeeb9f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.989577246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10354b5f-557e-4ea2-95f5-ffb206a0cac0 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.989672779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10354b5f-557e-4ea2-95f5-ffb206a0cac0 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.991720479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4990c5e8-db98-49ae-aaf2-f5548b0c327f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.992351006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457661992322153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4990c5e8-db98-49ae-aaf2-f5548b0c327f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.992944731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c911ca1-047c-4cc5-8980-94d495e02ee3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.993010290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c911ca1-047c-4cc5-8980-94d495e02ee3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:41 multinode-094885 crio[2971]: time="2024-07-31 20:27:41.993464701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15436fdd715785b635301ef11a649bb91a95d21320efe544557f270eace6df3f,PodSandboxId:1906f71f375b03f83f43c1528754732e1bfab0e9bbbf79523cacefa9ae27f715,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457455146360216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3,PodSandboxId:8a835dbc9646b109d083034ec8434f3d59a55bb87e020ad62206eeaa03be1fca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722457421633891789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1,PodSandboxId:ee745ccf6b1f15ef5701ee62a3ed93b59dcb2e4d5aeb23a16c40b9ea0cfe93a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457421587768075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97d9921b6b5b5e2e94ac6d8334ef0a99462ed8bb280be53483e354fc701ed19,PodSandboxId:9f1923274a45e66fdb7dba3d2b0ea4762ccdd3142a59f5b923bc4b6eee444280,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457421465473809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},An
notations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5,PodSandboxId:7775de6c551cfa697c443a1c10393f4987772634e09d1fe63430f301d84e5fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722457421417028604,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3,PodSandboxId:225fc91b7219deee45bab76b3fb7e7adf461f40fdaa6d410cda4871f1c90fc75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457416548041781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119,PodSandboxId:76048d11be0b39034995b7a3f5beb46372177f5072ed6a70450fdac83707a0da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457416478564127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85,PodSandboxId:a680464821f8e0e9fede1b3977123f951d9fb3c5ce53dec2842e5ce3272799ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457416512645124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83,PodSandboxId:eaf65f3cefe47f366ebebdf1c1b7fd10b0193a8b4de60003ad3f5ed12ec7fcb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457416455729519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb,PodSandboxId:cb7c00fd54be8529819c7f6fbe71d0abd6baf362ef761c93c3fb16f926ae1a33,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457402800817088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d04b8842076b61a34fd5b5ecfe3702a29f477a4e4f35542098783b88c33a82ca,PodSandboxId:2659ab64605639979224b37c4547b2024bc05913f45a9a7bb405ec83131ae9af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457082072782326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230,PodSandboxId:462cb7067877aab3a2ecfea2172d63ecd9b051871faa6d504453347a15e22619,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722457024342454654,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},Annotations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a,PodSandboxId:d53634fcc825315ef3f58ad427820c1422931f1314b749c2f36bf1d2a5d16d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457012348726824,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761,PodSandboxId:bee3609a504470e74917d47a74616ca3798ef90df0b6d23171ae00239775d808,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457008537165511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9,PodSandboxId:ddd0df2e6857e8fb0ac2f5fb7b3deb0327e935ebf77ea225542a00732dc05300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722456989271588300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c
17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb,PodSandboxId:3027554ccce1d345e3b6c8beb43cbee5573a4675e09e728533c0e6c178a996f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722456989234785043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d,PodSandboxId:e2ffd4f3dae22fb8ff47764ca6c9f49bad4926aa0fee04c879f049bd513c68e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722456989237785971,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521,PodSandboxId:7d22a0fb42285db96ac26c365a427ad06da26d5291b1544cece1e6dc093ab549,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722456989177786195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c911ca1-047c-4cc5-8980-94d495e02ee3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:42 multinode-094885 crio[2971]: time="2024-07-31 20:27:42.032838899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0bf59aa4-f20d-40c1-9534-31c5c4644fab name=/runtime.v1.RuntimeService/Version
	Jul 31 20:27:42 multinode-094885 crio[2971]: time="2024-07-31 20:27:42.032912700Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0bf59aa4-f20d-40c1-9534-31c5c4644fab name=/runtime.v1.RuntimeService/Version
	Jul 31 20:27:42 multinode-094885 crio[2971]: time="2024-07-31 20:27:42.034189077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4202dcd7-67d4-4455-80d0-f515fb20b2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:27:42 multinode-094885 crio[2971]: time="2024-07-31 20:27:42.034704844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722457662034682536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4202dcd7-67d4-4455-80d0-f515fb20b2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:27:42 multinode-094885 crio[2971]: time="2024-07-31 20:27:42.035318582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8df9a6eb-6c7c-4e64-a4dd-f2f7331a5955 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:42 multinode-094885 crio[2971]: time="2024-07-31 20:27:42.035388984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8df9a6eb-6c7c-4e64-a4dd-f2f7331a5955 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:27:42 multinode-094885 crio[2971]: time="2024-07-31 20:27:42.035732651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15436fdd715785b635301ef11a649bb91a95d21320efe544557f270eace6df3f,PodSandboxId:1906f71f375b03f83f43c1528754732e1bfab0e9bbbf79523cacefa9ae27f715,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722457455146360216,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3,PodSandboxId:8a835dbc9646b109d083034ec8434f3d59a55bb87e020ad62206eeaa03be1fca,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722457421633891789,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1,PodSandboxId:ee745ccf6b1f15ef5701ee62a3ed93b59dcb2e4d5aeb23a16c40b9ea0cfe93a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722457421587768075,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a97d9921b6b5b5e2e94ac6d8334ef0a99462ed8bb280be53483e354fc701ed19,PodSandboxId:9f1923274a45e66fdb7dba3d2b0ea4762ccdd3142a59f5b923bc4b6eee444280,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722457421465473809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},An
notations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5,PodSandboxId:7775de6c551cfa697c443a1c10393f4987772634e09d1fe63430f301d84e5fc9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722457421417028604,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3,PodSandboxId:225fc91b7219deee45bab76b3fb7e7adf461f40fdaa6d410cda4871f1c90fc75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722457416548041781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119,PodSandboxId:76048d11be0b39034995b7a3f5beb46372177f5072ed6a70450fdac83707a0da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722457416478564127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]string{io.kub
ernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85,PodSandboxId:a680464821f8e0e9fede1b3977123f951d9fb3c5ce53dec2842e5ce3272799ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722457416512645124,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83,PodSandboxId:eaf65f3cefe47f366ebebdf1c1b7fd10b0193a8b4de60003ad3f5ed12ec7fcb8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722457416455729519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb,PodSandboxId:cb7c00fd54be8529819c7f6fbe71d0abd6baf362ef761c93c3fb16f926ae1a33,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722457402800817088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sh4fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34113636-7979-4b54-bf2a-37c49178450d,},Annotations:map[string]string{io.kubernetes.container.hash: 63f8e4c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d04b8842076b61a34fd5b5ecfe3702a29f477a4e4f35542098783b88c33a82ca,PodSandboxId:2659ab64605639979224b37c4547b2024bc05913f45a9a7bb405ec83131ae9af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722457082072782326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwlpt,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 49fb91bd-1c6c-4dfb-af51-7f1604463b26,},Annotations:map[string]string{io.kubernetes.container.hash: dba96ca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72e862487909d32a034d59c9ec722d2003e2ea4b858f2736fe642cce09f2c230,PodSandboxId:462cb7067877aab3a2ecfea2172d63ecd9b051871faa6d504453347a15e22619,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722457024342454654,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 360a1917-a5a8-4093-b355-c774cccc8548,},Annotations:map[string]string{io.kubernetes.container.hash: b389d224,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a,PodSandboxId:d53634fcc825315ef3f58ad427820c1422931f1314b749c2f36bf1d2a5d16d77,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722457012348726824,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-glw6d,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 9252257a-3126-4945-8013-bbd3a4c9f820,},Annotations:map[string]string{io.kubernetes.container.hash: 4a7249b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761,PodSandboxId:bee3609a504470e74917d47a74616ca3798ef90df0b6d23171ae00239775d808,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722457008537165511,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vcsv5,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 474aef7b-6525-439f-baa8-801e799ea6a7,},Annotations:map[string]string{io.kubernetes.container.hash: 699f1ad0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9,PodSandboxId:ddd0df2e6857e8fb0ac2f5fb7b3deb0327e935ebf77ea225542a00732dc05300,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722456989271588300,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b066168dce1fb13b29b0e5215f2e4c
17,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0abe50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb,PodSandboxId:3027554ccce1d345e3b6c8beb43cbee5573a4675e09e728533c0e6c178a996f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722456989234785043,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 293234958929e5b2f40fcf9fe89f059c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d,PodSandboxId:e2ffd4f3dae22fb8ff47764ca6c9f49bad4926aa0fee04c879f049bd513c68e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722456989237785971,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed18c908b1631740f056181e183d629b,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3e59ec47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521,PodSandboxId:7d22a0fb42285db96ac26c365a427ad06da26d5291b1544cece1e6dc093ab549,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722456989177786195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-094885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff917910c99be5ca87c83a0532756771,},Annotations:m
ap[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8df9a6eb-6c7c-4e64-a4dd-f2f7331a5955 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	15436fdd71578       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   1906f71f375b0       busybox-fc5497c4f-wwlpt
	64ad53c3587b5       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   8a835dbc9646b       kindnet-glw6d
	a7e9d628cf7a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   ee745ccf6b1f1       coredns-7db6d8ff4d-sh4fx
	a97d9921b6b5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   9f1923274a45e       storage-provisioner
	4bd55608813a0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   7775de6c551cf       kube-proxy-vcsv5
	95e1203585db2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   225fc91b7219d       kube-controller-manager-multinode-094885
	7ccb2a4daa9e2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   a680464821f8e       kube-scheduler-multinode-094885
	170e2ce2375b5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   76048d11be0b3       kube-apiserver-multinode-094885
	c155ec9b0f0ae       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   eaf65f3cefe47       etcd-multinode-094885
	7ccbc51911f88       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   cb7c00fd54be8       coredns-7db6d8ff4d-sh4fx
	d04b8842076b6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   2659ab6460563       busybox-fc5497c4f-wwlpt
	72e862487909d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   462cb7067877a       storage-provisioner
	86082e9a17e18       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   d53634fcc8253       kindnet-glw6d
	4d7a4222195e1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   bee3609a50447       kube-proxy-vcsv5
	3cb70dfa50e8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   ddd0df2e6857e       etcd-multinode-094885
	bd55ce3db2a7d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   e2ffd4f3dae22       kube-apiserver-multinode-094885
	25141b1279c4b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   3027554ccce1d       kube-scheduler-multinode-094885
	1d62542ea5da5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   7d22a0fb42285       kube-controller-manager-multinode-094885
	
	
	==> coredns [7ccbc51911f88b2cea53f55b4e9226d72df1a15a63947dccb6900e21b71381fb] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:38098 - 12928 "HINFO IN 8460983658922911469.4892911557547791121. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015202375s
	
	
	==> coredns [a7e9d628cf7a5a1a13be57953e39388980cf20ecbb7d664dc6876fb4361aa3c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60945 - 38704 "HINFO IN 8032362761859454990.3906396736182584970. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01329722s
	
	
	==> describe nodes <==
	Name:               multinode-094885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-094885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=multinode-094885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_16_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-094885
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:27:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:23:40 +0000   Wed, 31 Jul 2024 20:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:23:40 +0000   Wed, 31 Jul 2024 20:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:23:40 +0000   Wed, 31 Jul 2024 20:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:23:40 +0000   Wed, 31 Jul 2024 20:17:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    multinode-094885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ee7f996ff9584406978b08296319e67b
	  System UUID:                ee7f996f-f958-4406-978b-08296319e67b
	  Boot ID:                    2e0f464f-999d-45cb-8453-39f654e528b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwlpt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 coredns-7db6d8ff4d-sh4fx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-094885                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-glw6d                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-094885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-094885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vcsv5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-094885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node multinode-094885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node multinode-094885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node multinode-094885 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-094885 event: Registered Node multinode-094885 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-094885 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-094885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-094885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-094885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-094885 event: Registered Node multinode-094885 in Controller
	
	
	Name:               multinode-094885-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-094885-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=multinode-094885
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T20_24_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:24:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-094885-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:25:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 20:24:48 +0000   Wed, 31 Jul 2024 20:26:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 20:24:48 +0000   Wed, 31 Jul 2024 20:26:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 20:24:48 +0000   Wed, 31 Jul 2024 20:26:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 20:24:48 +0000   Wed, 31 Jul 2024 20:26:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    multinode-094885-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8471c475397b4e6fabcd63873a1dced7
	  System UUID:                8471c475-397b-4e6f-abcd-63873a1dced7
	  Boot ID:                    c6d77d04-2134-42c5-a8e5-3a3f6010ec60
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pmhlm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-w7fnj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-g62ct           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-094885-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-094885-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-094885-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m46s                  kubelet          Node multinode-094885-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-094885-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-094885-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-094885-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-094885-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-094885-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058164] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195356] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.122081] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285227] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.220028] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +4.174735] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.054729] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.990974] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.086164] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.583485] systemd-fstab-generator[1467]: Ignoring "noauto" option for root device
	[  +0.110133] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.376545] kauditd_printk_skb: 56 callbacks suppressed
	[Jul31 20:17] kauditd_printk_skb: 12 callbacks suppressed
	[Jul31 20:23] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.134393] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.174483] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.132118] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.382515] systemd-fstab-generator[2939]: Ignoring "noauto" option for root device
	[ +10.722573] systemd-fstab-generator[3079]: Ignoring "noauto" option for root device
	[  +0.083948] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.882554] systemd-fstab-generator[3219]: Ignoring "noauto" option for root device
	[  +5.734902] kauditd_printk_skb: 76 callbacks suppressed
	[ +12.227317] systemd-fstab-generator[4046]: Ignoring "noauto" option for root device
	[  +0.110520] kauditd_printk_skb: 32 callbacks suppressed
	[Jul31 20:24] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [3cb70dfa50e8e788940f8a1dc034720fb6ac7ad4b9ccbc7338f3428637dab8b9] <==
	{"level":"info","ts":"2024-07-31T20:16:29.797627Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:16:29.797667Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:16:29.801344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:16:29.801438Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T20:16:29.795805Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	{"level":"warn","ts":"2024-07-31T20:17:36.229742Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.33051ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517753453783015819 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-094885-m02.17e7659101756127\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-094885-m02.17e7659101756127\" value_size:646 lease:1294381416928240009 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:17:36.230194Z","caller":"traceutil/trace.go:171","msg":"trace[592286302] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"239.496548ms","start":"2024-07-31T20:17:35.990664Z","end":"2024-07-31T20:17:36.230161Z","steps":["trace[592286302] 'process raft request'  (duration: 90.075797ms)","trace[592286302] 'compare'  (duration: 148.08338ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T20:17:36.230602Z","caller":"traceutil/trace.go:171","msg":"trace[777462368] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"203.82761ms","start":"2024-07-31T20:17:36.026725Z","end":"2024-07-31T20:17:36.230553Z","steps":["trace[777462368] 'process raft request'  (duration: 203.361056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:18:34.511753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.068765ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517753453783016285 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-094885-m03.17e7659e94e51b10\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-094885-m03.17e7659e94e51b10\" value_size:642 lease:1294381416928240009 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:18:34.512151Z","caller":"traceutil/trace.go:171","msg":"trace[1625623834] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"154.487095ms","start":"2024-07-31T20:18:34.357586Z","end":"2024-07-31T20:18:34.512073Z","steps":["trace[1625623834] 'process raft request'  (duration: 154.355749ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:18:34.51241Z","caller":"traceutil/trace.go:171","msg":"trace[447460658] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"220.023156ms","start":"2024-07-31T20:18:34.292373Z","end":"2024-07-31T20:18:34.512396Z","steps":["trace[447460658] 'process raft request'  (duration: 61.17617ms)","trace[447460658] 'compare'  (duration: 157.969169ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:18:37.748963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.044594ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517753453783016351 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-094885-m03.17e7659f4fa51335\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-094885-m03.17e7659f4fa51335\" value_size:629 lease:1294381416928240535 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T20:18:37.749385Z","caller":"traceutil/trace.go:171","msg":"trace[1938264959] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"344.121957ms","start":"2024-07-31T20:18:37.405095Z","end":"2024-07-31T20:18:37.749217Z","steps":["trace[1938264959] 'process raft request'  (duration: 83.768996ms)","trace[1938264959] 'compare'  (duration: 259.661316ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T20:18:37.749498Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:18:37.405079Z","time spent":"344.379339ms","remote":"127.0.0.1:46256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":709,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/default/multinode-094885-m03.17e7659f4fa51335\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-094885-m03.17e7659f4fa51335\" value_size:629 lease:1294381416928240535 >> failure:<>"}
	{"level":"info","ts":"2024-07-31T20:18:37.749817Z","caller":"traceutil/trace.go:171","msg":"trace[982007433] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"254.015547ms","start":"2024-07-31T20:18:37.495794Z","end":"2024-07-31T20:18:37.74981Z","steps":["trace[982007433] 'process raft request'  (duration: 253.845164ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T20:21:50.685385Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T20:21:50.685547Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-094885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	{"level":"warn","ts":"2024-07-31T20:21:50.685692Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:21:50.68578Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:21:50.772605Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:21:50.772696Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.193:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T20:21:50.772764Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"97ba5874d4d591f6","current-leader-member-id":"97ba5874d4d591f6"}
	{"level":"info","ts":"2024-07-31T20:21:50.775469Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-07-31T20:21:50.775688Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-07-31T20:21:50.775722Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-094885","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"]}
	
	
	==> etcd [c155ec9b0f0ae3fae47802262dec33f8c36c8fd1727326b616cb03ec5e7c2f83] <==
	{"level":"info","ts":"2024-07-31T20:23:36.935141Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T20:23:36.93754Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"97ba5874d4d591f6","initial-advertise-peer-urls":["https://192.168.39.193:2380"],"listen-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.193:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T20:23:36.937716Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T20:23:36.937833Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-07-31T20:23:36.937842Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-07-31T20:23:36.928549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 switched to configuration voters=(10933148304205517302)"}
	{"level":"info","ts":"2024-07-31T20:23:36.939137Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","added-peer-id":"97ba5874d4d591f6","added-peer-peer-urls":["https://192.168.39.193:2380"]}
	{"level":"info","ts":"2024-07-31T20:23:36.939336Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:23:36.939373Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:23:36.939001Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T20:23:36.957711Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T20:23:38.779631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T20:23:38.779748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T20:23:38.779808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgPreVoteResp from 97ba5874d4d591f6 at term 2"}
	{"level":"info","ts":"2024-07-31T20:23:38.779838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T20:23:38.779863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgVoteResp from 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-07-31T20:23:38.779889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T20:23:38.779919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97ba5874d4d591f6 elected leader 97ba5874d4d591f6 at term 3"}
	{"level":"info","ts":"2024-07-31T20:23:38.785119Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"97ba5874d4d591f6","local-member-attributes":"{Name:multinode-094885 ClientURLs:[https://192.168.39.193:2379]}","request-path":"/0/members/97ba5874d4d591f6/attributes","cluster-id":"9afeb12ac4c1a90a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T20:23:38.785398Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:23:38.785447Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:23:38.785879Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:23:38.785925Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T20:23:38.787625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:23:38.788064Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	
	
	==> kernel <==
	 20:27:42 up 11 min,  0 users,  load average: 0.44, 0.35, 0.19
	Linux multinode-094885 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [64ad53c3587b5568b79502878d17b766cf54e7c07bcc3fef95758ce5918270c3] <==
	I0731 20:26:32.669408       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:26:42.661420       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:26:42.661535       1 main.go:299] handling current node
	I0731 20:26:42.661564       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:26:42.661582       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:26:52.666177       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:26:52.666363       1 main.go:299] handling current node
	I0731 20:26:52.666413       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:26:52.666434       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:27:02.661488       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:27:02.661545       1 main.go:299] handling current node
	I0731 20:27:02.661571       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:27:02.661577       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:27:12.661357       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:27:12.661419       1 main.go:299] handling current node
	I0731 20:27:12.661437       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:27:12.661449       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:27:22.668539       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:27:22.668638       1 main.go:299] handling current node
	I0731 20:27:22.668666       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:27:22.668671       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:27:32.668435       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:27:32.668477       1 main.go:299] handling current node
	I0731 20:27:32.668499       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:27:32.668505       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [86082e9a17e18abeef414a96a2ce5e84ea762b3b9eae19e1e48e9a8b5d49804a] <==
	I0731 20:21:03.462607       1 main.go:299] handling current node
	I0731 20:21:13.456312       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:21:13.456525       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:21:13.456729       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:21:13.456760       1 main.go:299] handling current node
	I0731 20:21:13.456787       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:21:13.456812       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:21:23.462076       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:21:23.462162       1 main.go:299] handling current node
	I0731 20:21:23.462192       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:21:23.462198       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:21:23.462392       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:21:23.462418       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:21:33.455660       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:21:33.455733       1 main.go:299] handling current node
	I0731 20:21:33.455749       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:21:33.455755       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:21:33.455889       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:21:33.455912       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:21:43.464371       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 20:21:43.464514       1 main.go:322] Node multinode-094885-m02 has CIDR [10.244.1.0/24] 
	I0731 20:21:43.464712       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0731 20:21:43.464754       1 main.go:322] Node multinode-094885-m03 has CIDR [10.244.3.0/24] 
	I0731 20:21:43.464840       1 main.go:295] Handling node with IPs: map[192.168.39.193:{}]
	I0731 20:21:43.464860       1 main.go:299] handling current node
	
	
	==> kube-apiserver [170e2ce2375b5d347dd27a7f6671e582c5e4f2eb1fa1be6c22012910ce5c5119] <==
	I0731 20:23:40.044919       1 establishing_controller.go:76] Starting EstablishingController
	I0731 20:23:40.044947       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0731 20:23:40.044990       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0731 20:23:40.045010       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0731 20:23:40.094753       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 20:23:40.094953       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 20:23:40.095080       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 20:23:40.095559       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 20:23:40.095607       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 20:23:40.099779       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 20:23:40.102440       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0731 20:23:40.111002       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 20:23:40.117541       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 20:23:40.121945       1 cache.go:39] Caches are synced for autoregister controller
	I0731 20:23:40.146324       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 20:23:40.146402       1 policy_source.go:224] refreshing policies
	I0731 20:23:40.163569       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 20:23:41.007002       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 20:23:42.353986       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 20:23:42.481931       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 20:23:42.495894       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 20:23:42.580703       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 20:23:42.587063       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 20:23:53.388193       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:23:53.688719       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [bd55ce3db2a7d9e442823271d3bbfa8562e77c7ae881497975e66ea7e6547a6d] <==
	E0731 20:18:04.709119       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:53440: use of closed network connection
	E0731 20:18:04.876811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:53456: use of closed network connection
	E0731 20:18:05.050967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:53476: use of closed network connection
	E0731 20:18:05.226571       1 conn.go:339] Error on socket receive: read tcp 192.168.39.193:8443->192.168.39.1:53486: use of closed network connection
	I0731 20:21:50.690108       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0731 20:21:50.703975       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0731 20:21:50.705585       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0731 20:21:50.706628       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.707822       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.708013       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.708048       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.708184       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.708215       1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709282       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709327       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709650       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709849       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.709949       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710160       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710343       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710451       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710557       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.710864       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 20:21:50.712088       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0731 20:21:50.712435       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-controller-manager [1d62542ea5da5222f8b762ce76723d43560cda4cc13ef73726c65608d6ef6521] <==
	I0731 20:17:36.234914       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m02\" does not exist"
	I0731 20:17:36.248769       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m02" podCIDRs=["10.244.1.0/24"]
	I0731 20:17:37.378473       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-094885-m02"
	I0731 20:17:56.867194       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:17:59.072287       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.152173ms"
	I0731 20:17:59.095540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.052525ms"
	I0731 20:17:59.095648       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.227µs"
	I0731 20:17:59.095713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.043µs"
	I0731 20:18:02.653226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.674767ms"
	I0731 20:18:02.653725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.07µs"
	I0731 20:18:03.126879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.210578ms"
	I0731 20:18:03.127026       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.757µs"
	I0731 20:18:34.519580       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:18:34.519958       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m03\" does not exist"
	I0731 20:18:34.536858       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m03" podCIDRs=["10.244.2.0/24"]
	I0731 20:18:37.403056       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-094885-m03"
	I0731 20:18:55.129426       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:19:24.360520       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:19:25.569329       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m03\" does not exist"
	I0731 20:19:25.569573       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:19:25.586530       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m03" podCIDRs=["10.244.3.0/24"]
	I0731 20:19:45.141447       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:20:27.464490       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:20:32.554760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.191405ms"
	I0731 20:20:32.555475       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.059µs"
	
	
	==> kube-controller-manager [95e1203585db282d87e855e71382c41a4bb300ef267cff506afeb8117170c7b3] <==
	I0731 20:24:18.131785       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m02" podCIDRs=["10.244.1.0/24"]
	I0731 20:24:19.049534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.809µs"
	I0731 20:24:19.064344       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.04µs"
	I0731 20:24:19.073774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.207µs"
	I0731 20:24:19.088173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.493µs"
	I0731 20:24:19.095997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.904µs"
	I0731 20:24:19.100814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.613µs"
	I0731 20:24:23.650146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.903µs"
	I0731 20:24:36.618847       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:24:36.639664       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.239µs"
	I0731 20:24:36.655381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.418µs"
	I0731 20:24:40.452863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.03308ms"
	I0731 20:24:40.453085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.991µs"
	I0731 20:24:54.818891       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:24:55.913882       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-094885-m03\" does not exist"
	I0731 20:24:55.915681       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:24:55.938389       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-094885-m03" podCIDRs=["10.244.2.0/24"]
	I0731 20:25:15.656525       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:25:20.935911       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-094885-m02"
	I0731 20:26:03.522186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.013665ms"
	I0731 20:26:03.522335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.101µs"
	I0731 20:26:13.428199       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-68dx5"
	I0731 20:26:13.452902       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-68dx5"
	I0731 20:26:13.453047       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mpj87"
	I0731 20:26:13.474878       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mpj87"
	
	
	==> kube-proxy [4bd55608813a0f0c36d2a388e76d2741aea9db7517c652637946f5d9ad76acd5] <==
	I0731 20:23:41.710109       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:23:41.737657       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0731 20:23:41.804156       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:23:41.804305       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:23:41.804325       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:23:41.807058       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:23:41.807599       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:23:41.807637       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:23:41.809712       1 config.go:192] "Starting service config controller"
	I0731 20:23:41.809744       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:23:41.809772       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:23:41.809776       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:23:41.810184       1 config.go:319] "Starting node config controller"
	I0731 20:23:41.810225       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:23:41.910811       1 shared_informer.go:320] Caches are synced for node config
	I0731 20:23:41.910889       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:23:41.910922       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [4d7a4222195e163501ef8c970cf2272d4d92203c6b85fadf372dc530a5ff2761] <==
	I0731 20:16:48.835290       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:16:48.850192       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	I0731 20:16:48.889489       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:16:48.889551       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:16:48.889567       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:16:48.892347       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:16:48.892598       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:16:48.892628       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:16:48.895350       1 config.go:192] "Starting service config controller"
	I0731 20:16:48.895592       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:16:48.895923       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:16:48.895932       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:16:48.898421       1 config.go:319] "Starting node config controller"
	I0731 20:16:48.898446       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:16:48.996094       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:16:48.996094       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:16:48.998704       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [25141b1279c4b01c415837cb60597cae930af0d465ad070502ff71a3e82b4afb] <==
	E0731 20:16:31.735906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:16:32.547604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:16:32.547738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:16:32.583434       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 20:16:32.583527       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:16:32.613467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:16:32.613557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 20:16:32.657763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 20:16:32.657792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:16:32.659827       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 20:16:32.659877       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 20:16:32.770205       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 20:16:32.770315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 20:16:32.859076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 20:16:32.859140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 20:16:32.921452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 20:16:32.921499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 20:16:32.941003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 20:16:32.941151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 20:16:33.061288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 20:16:33.061336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 20:16:33.064549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 20:16:33.064624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 20:16:34.330734       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 20:21:50.687099       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7ccb2a4daa9e2e76efe02e7d9f73767ad460acbd85dacbb0a3beacd058c19f85] <==
	I0731 20:23:37.605960       1 serving.go:380] Generated self-signed cert in-memory
	I0731 20:23:40.083371       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 20:23:40.083430       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:23:40.087125       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 20:23:40.087211       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0731 20:23:40.087218       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0731 20:23:40.087300       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 20:23:40.090607       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 20:23:40.090638       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 20:23:40.090653       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0731 20:23:40.090659       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 20:23:40.188148       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0731 20:23:40.191661       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 20:23:40.191720       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.866991    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9252257a-3126-4945-8013-bbd3a4c9f820-xtables-lock\") pod \"kindnet-glw6d\" (UID: \"9252257a-3126-4945-8013-bbd3a4c9f820\") " pod="kube-system/kindnet-glw6d"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.867087    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9252257a-3126-4945-8013-bbd3a4c9f820-lib-modules\") pod \"kindnet-glw6d\" (UID: \"9252257a-3126-4945-8013-bbd3a4c9f820\") " pod="kube-system/kindnet-glw6d"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.867191    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9252257a-3126-4945-8013-bbd3a4c9f820-cni-cfg\") pod \"kindnet-glw6d\" (UID: \"9252257a-3126-4945-8013-bbd3a4c9f820\") " pod="kube-system/kindnet-glw6d"
	Jul 31 20:23:40 multinode-094885 kubelet[3226]: I0731 20:23:40.867742    3226 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/360a1917-a5a8-4093-b355-c774cccc8548-tmp\") pod \"storage-provisioner\" (UID: \"360a1917-a5a8-4093-b355-c774cccc8548\") " pod="kube-system/storage-provisioner"
	Jul 31 20:23:43 multinode-094885 kubelet[3226]: I0731 20:23:43.478173    3226 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 20:24:35 multinode-094885 kubelet[3226]: E0731 20:24:35.870862    3226 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:24:35 multinode-094885 kubelet[3226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:24:35 multinode-094885 kubelet[3226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:24:35 multinode-094885 kubelet[3226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:24:35 multinode-094885 kubelet[3226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:25:35 multinode-094885 kubelet[3226]: E0731 20:25:35.871687    3226 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:25:35 multinode-094885 kubelet[3226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:25:35 multinode-094885 kubelet[3226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:25:35 multinode-094885 kubelet[3226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:25:35 multinode-094885 kubelet[3226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:26:35 multinode-094885 kubelet[3226]: E0731 20:26:35.874750    3226 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:26:35 multinode-094885 kubelet[3226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:26:35 multinode-094885 kubelet[3226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:26:35 multinode-094885 kubelet[3226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:26:35 multinode-094885 kubelet[3226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 20:27:35 multinode-094885 kubelet[3226]: E0731 20:27:35.873613    3226 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 20:27:35 multinode-094885 kubelet[3226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 20:27:35 multinode-094885 kubelet[3226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 20:27:35 multinode-094885 kubelet[3226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 20:27:35 multinode-094885 kubelet[3226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:27:41.618664  160610 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-121704/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-094885 -n multinode-094885
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-094885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.28s)
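The stderr above ends with minikube's log collector aborting on lastStart.txt with a bufio.Scanner "token too long" error, i.e. a single line in that file exceeds the scanner's default 64 KiB token limit. Below is a minimal Go sketch of that failure mode and the usual workaround (a larger scanner buffer); the file path is copied from the stderr above, the buffer sizes are illustrative assumptions, and this is not minikube's actual log-reading code.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path copied from the stderr above; any file with a very long line reproduces the issue.
	f, err := os.Open("/home/jenkins/minikube-integration/19355-121704/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call, any line longer than bufio.MaxScanTokenSize (64 KiB) makes
	// sc.Err() return "bufio.Scanner: token too long", the error reported above.
	sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024) // assumed sizes: allow lines up to 16 MiB

	lines := 0
	for sc.Scan() {
		lines++
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
		return
	}
	fmt.Println("scanned", lines, "lines")
}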

                                                
                                    
TestPreload (256.7s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-520960 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0731 20:32:17.627024  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 20:32:34.580219  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-520960 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m52.756844751s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-520960 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-520960 image pull gcr.io/k8s-minikube/busybox: (2.6352046s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-520960
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-520960: (7.285361319s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-520960 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0731 20:35:09.825989  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-520960 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.003174759s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-520960 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
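The image list above shows that gcr.io/k8s-minikube/busybox, pulled before the stop, is no longer present after the restart, which is what the assertion rejects. A minimal Go sketch of that check, using the same binary, profile name, and image reference shown in this report (an illustration of the assertion's shape, not the actual preload_test.go source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same "image list" command the test runs against the restarted profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-520960", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("PASS: preloaded busybox image survived the stop/start cycle")
	} else {
		// This branch corresponds to the failure reported above.
		fmt.Println("FAIL: gcr.io/k8s-minikube/busybox missing from image list")
	}
}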
panic.go:626: *** TestPreload FAILED at 2024-07-31 20:35:47.823946322 +0000 UTC m=+4120.074304157
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-520960 -n test-preload-520960
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-520960 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-520960 logs -n 25: (1.083358867s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885 sudo cat                                       | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m03_multinode-094885.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt                       | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m02:/home/docker/cp-test_multinode-094885-m03_multinode-094885-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n                                                                 | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | multinode-094885-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-094885 ssh -n multinode-094885-m02 sudo cat                                   | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | /home/docker/cp-test_multinode-094885-m03_multinode-094885-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-094885 node stop m03                                                          | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	| node    | multinode-094885 node start                                                             | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC | 31 Jul 24 20:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-094885                                                                | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC |                     |
	| stop    | -p multinode-094885                                                                     | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:19 UTC |                     |
	| start   | -p multinode-094885                                                                     | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:21 UTC | 31 Jul 24 20:25 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-094885                                                                | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:25 UTC |                     |
	| node    | multinode-094885 node delete                                                            | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:25 UTC | 31 Jul 24 20:25 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-094885 stop                                                                   | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:25 UTC |                     |
	| start   | -p multinode-094885                                                                     | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:27 UTC | 31 Jul 24 20:30 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-094885                                                                | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:30 UTC |                     |
	| start   | -p multinode-094885-m02                                                                 | multinode-094885-m02 | jenkins | v1.33.1 | 31 Jul 24 20:30 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-094885-m03                                                                 | multinode-094885-m03 | jenkins | v1.33.1 | 31 Jul 24 20:30 UTC | 31 Jul 24 20:31 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-094885                                                                 | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:31 UTC |                     |
	| delete  | -p multinode-094885-m03                                                                 | multinode-094885-m03 | jenkins | v1.33.1 | 31 Jul 24 20:31 UTC | 31 Jul 24 20:31 UTC |
	| delete  | -p multinode-094885                                                                     | multinode-094885     | jenkins | v1.33.1 | 31 Jul 24 20:31 UTC | 31 Jul 24 20:31 UTC |
	| start   | -p test-preload-520960                                                                  | test-preload-520960  | jenkins | v1.33.1 | 31 Jul 24 20:31 UTC | 31 Jul 24 20:34 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-520960 image pull                                                          | test-preload-520960  | jenkins | v1.33.1 | 31 Jul 24 20:34 UTC | 31 Jul 24 20:34 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-520960                                                                  | test-preload-520960  | jenkins | v1.33.1 | 31 Jul 24 20:34 UTC | 31 Jul 24 20:34 UTC |
	| start   | -p test-preload-520960                                                                  | test-preload-520960  | jenkins | v1.33.1 | 31 Jul 24 20:34 UTC | 31 Jul 24 20:35 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-520960 image list                                                          | test-preload-520960  | jenkins | v1.33.1 | 31 Jul 24 20:35 UTC | 31 Jul 24 20:35 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:34:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:34:36.639931  163365 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:34:36.640051  163365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:34:36.640061  163365 out.go:304] Setting ErrFile to fd 2...
	I0731 20:34:36.640068  163365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:34:36.640246  163365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:34:36.640792  163365 out.go:298] Setting JSON to false
	I0731 20:34:36.641725  163365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8213,"bootTime":1722449864,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:34:36.641779  163365 start.go:139] virtualization: kvm guest
	I0731 20:34:36.644012  163365 out.go:177] * [test-preload-520960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:34:36.645312  163365 notify.go:220] Checking for updates...
	I0731 20:34:36.645353  163365 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:34:36.646781  163365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:34:36.648011  163365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:34:36.649419  163365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:34:36.650934  163365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:34:36.652238  163365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:34:36.653924  163365 config.go:182] Loaded profile config "test-preload-520960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 20:34:36.654348  163365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:34:36.654399  163365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:34:36.668821  163365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0731 20:34:36.669233  163365 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:34:36.669781  163365 main.go:141] libmachine: Using API Version  1
	I0731 20:34:36.669802  163365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:34:36.670162  163365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:34:36.670339  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:34:36.672193  163365 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 20:34:36.673412  163365 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:34:36.673698  163365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:34:36.673729  163365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:34:36.688065  163365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
	I0731 20:34:36.688474  163365 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:34:36.689004  163365 main.go:141] libmachine: Using API Version  1
	I0731 20:34:36.689024  163365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:34:36.689375  163365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:34:36.689595  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:34:36.723353  163365 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:34:36.724710  163365 start.go:297] selected driver: kvm2
	I0731 20:34:36.724721  163365 start.go:901] validating driver "kvm2" against &{Name:test-preload-520960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-520960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:34:36.724836  163365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:34:36.725512  163365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:34:36.725604  163365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:34:36.740642  163365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:34:36.740973  163365 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:34:36.741035  163365 cni.go:84] Creating CNI manager for ""
	I0731 20:34:36.741049  163365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:34:36.741099  163365 start.go:340] cluster config:
	{Name:test-preload-520960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-520960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:34:36.741205  163365 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:34:36.743736  163365 out.go:177] * Starting "test-preload-520960" primary control-plane node in "test-preload-520960" cluster
	I0731 20:34:36.744978  163365 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 20:34:37.303317  163365 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 20:34:37.303351  163365 cache.go:56] Caching tarball of preloaded images
	I0731 20:34:37.303519  163365 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 20:34:37.305455  163365 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0731 20:34:37.306859  163365 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 20:34:37.418790  163365 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 20:34:50.287247  163365 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 20:34:50.287349  163365 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 20:34:51.256926  163365 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
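	(For reference, the preload fetch and checksum verification above can be reproduced by hand. The sketch below is illustrative only: the URL and md5 digest are copied from the log, while the use of curl and md5sum is an assumption, not what the harness itself runs.)
	curl -fLo preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
	echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -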
	I0731 20:34:51.257054  163365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/config.json ...
	I0731 20:34:51.257285  163365 start.go:360] acquireMachinesLock for test-preload-520960: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:34:51.257380  163365 start.go:364] duration metric: took 71.737µs to acquireMachinesLock for "test-preload-520960"
	I0731 20:34:51.257401  163365 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:34:51.257407  163365 fix.go:54] fixHost starting: 
	I0731 20:34:51.257737  163365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:34:51.257777  163365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:34:51.272653  163365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0731 20:34:51.273149  163365 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:34:51.273595  163365 main.go:141] libmachine: Using API Version  1
	I0731 20:34:51.273619  163365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:34:51.273966  163365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:34:51.274160  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:34:51.274341  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetState
	I0731 20:34:51.276153  163365 fix.go:112] recreateIfNeeded on test-preload-520960: state=Stopped err=<nil>
	I0731 20:34:51.276177  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	W0731 20:34:51.276347  163365 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:34:51.278796  163365 out.go:177] * Restarting existing kvm2 VM for "test-preload-520960" ...
	I0731 20:34:51.279974  163365 main.go:141] libmachine: (test-preload-520960) Calling .Start
	I0731 20:34:51.280132  163365 main.go:141] libmachine: (test-preload-520960) Ensuring networks are active...
	I0731 20:34:51.280975  163365 main.go:141] libmachine: (test-preload-520960) Ensuring network default is active
	I0731 20:34:51.281250  163365 main.go:141] libmachine: (test-preload-520960) Ensuring network mk-test-preload-520960 is active
	I0731 20:34:51.281676  163365 main.go:141] libmachine: (test-preload-520960) Getting domain xml...
	I0731 20:34:51.282575  163365 main.go:141] libmachine: (test-preload-520960) Creating domain...
	I0731 20:34:52.475237  163365 main.go:141] libmachine: (test-preload-520960) Waiting to get IP...
	I0731 20:34:52.476183  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:52.476579  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:52.476652  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:52.476570  163432 retry.go:31] will retry after 297.515604ms: waiting for machine to come up
	I0731 20:34:52.776336  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:52.776738  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:52.776765  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:52.776681  163432 retry.go:31] will retry after 355.854868ms: waiting for machine to come up
	I0731 20:34:53.134411  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:53.134769  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:53.134794  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:53.134749  163432 retry.go:31] will retry after 400.369963ms: waiting for machine to come up
	I0731 20:34:53.536304  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:53.536726  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:53.536756  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:53.536665  163432 retry.go:31] will retry after 410.403615ms: waiting for machine to come up
	I0731 20:34:53.948281  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:53.948718  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:53.948743  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:53.948682  163432 retry.go:31] will retry after 582.064615ms: waiting for machine to come up
	I0731 20:34:54.532330  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:54.532735  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:54.532765  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:54.532681  163432 retry.go:31] will retry after 949.971011ms: waiting for machine to come up
	I0731 20:34:55.483871  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:55.484289  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:55.484315  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:55.484256  163432 retry.go:31] will retry after 1.117692911s: waiting for machine to come up
	I0731 20:34:56.604060  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:56.604423  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:56.604456  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:56.604386  163432 retry.go:31] will retry after 1.095122019s: waiting for machine to come up
	I0731 20:34:57.701656  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:57.702015  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:57.702045  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:57.701962  163432 retry.go:31] will retry after 1.390202212s: waiting for machine to come up
	I0731 20:34:59.094391  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:34:59.094839  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:34:59.094871  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:34:59.094781  163432 retry.go:31] will retry after 1.886635698s: waiting for machine to come up
	I0731 20:35:00.983828  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:00.984264  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:35:00.984288  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:35:00.984215  163432 retry.go:31] will retry after 2.366286875s: waiting for machine to come up
	I0731 20:35:03.353179  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:03.353580  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:35:03.353608  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:35:03.353529  163432 retry.go:31] will retry after 3.352176465s: waiting for machine to come up
	I0731 20:35:06.707048  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:06.707474  163365 main.go:141] libmachine: (test-preload-520960) DBG | unable to find current IP address of domain test-preload-520960 in network mk-test-preload-520960
	I0731 20:35:06.707501  163365 main.go:141] libmachine: (test-preload-520960) DBG | I0731 20:35:06.707427  163432 retry.go:31] will retry after 3.928236251s: waiting for machine to come up
	I0731 20:35:10.640561  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.641110  163365 main.go:141] libmachine: (test-preload-520960) Found IP for machine: 192.168.39.177
	I0731 20:35:10.641135  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has current primary IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.641142  163365 main.go:141] libmachine: (test-preload-520960) Reserving static IP address...
	I0731 20:35:10.641607  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "test-preload-520960", mac: "52:54:00:d7:fe:48", ip: "192.168.39.177"} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:10.641636  163365 main.go:141] libmachine: (test-preload-520960) DBG | skip adding static IP to network mk-test-preload-520960 - found existing host DHCP lease matching {name: "test-preload-520960", mac: "52:54:00:d7:fe:48", ip: "192.168.39.177"}
	I0731 20:35:10.641651  163365 main.go:141] libmachine: (test-preload-520960) Reserved static IP address: 192.168.39.177
	I0731 20:35:10.641667  163365 main.go:141] libmachine: (test-preload-520960) Waiting for SSH to be available...
	I0731 20:35:10.641683  163365 main.go:141] libmachine: (test-preload-520960) DBG | Getting to WaitForSSH function...
	I0731 20:35:10.643699  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.644094  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:10.644129  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.644275  163365 main.go:141] libmachine: (test-preload-520960) DBG | Using SSH client type: external
	I0731 20:35:10.644304  163365 main.go:141] libmachine: (test-preload-520960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/test-preload-520960/id_rsa (-rw-------)
	I0731 20:35:10.644351  163365 main.go:141] libmachine: (test-preload-520960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/test-preload-520960/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:35:10.644370  163365 main.go:141] libmachine: (test-preload-520960) DBG | About to run SSH command:
	I0731 20:35:10.644390  163365 main.go:141] libmachine: (test-preload-520960) DBG | exit 0
	I0731 20:35:10.769435  163365 main.go:141] libmachine: (test-preload-520960) DBG | SSH cmd err, output: <nil>: 
	I0731 20:35:10.769786  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetConfigRaw
	I0731 20:35:10.770408  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetIP
	I0731 20:35:10.772816  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.773257  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:10.773290  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.773502  163365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/config.json ...
	I0731 20:35:10.773739  163365 machine.go:94] provisionDockerMachine start ...
	I0731 20:35:10.773762  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:35:10.773977  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:10.775821  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.776204  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:10.776231  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.776470  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:10.776644  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:10.776798  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:10.776917  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:10.777062  163365 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:10.777244  163365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0731 20:35:10.777254  163365 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:35:10.877612  163365 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:35:10.877647  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetMachineName
	I0731 20:35:10.877892  163365 buildroot.go:166] provisioning hostname "test-preload-520960"
	I0731 20:35:10.877917  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetMachineName
	I0731 20:35:10.878058  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:10.880541  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.880879  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:10.880907  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:10.881024  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:10.881211  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:10.881366  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:10.881549  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:10.881704  163365 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:10.881896  163365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0731 20:35:10.881912  163365 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-520960 && echo "test-preload-520960" | sudo tee /etc/hostname
	I0731 20:35:11.000336  163365 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-520960
	
	I0731 20:35:11.000369  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:11.003238  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.003595  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:11.003625  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.003821  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:11.004035  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:11.004219  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:11.004326  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:11.004493  163365 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:11.004689  163365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0731 20:35:11.004711  163365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-520960' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-520960/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-520960' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:35:11.114283  163365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:35:11.114318  163365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:35:11.114347  163365 buildroot.go:174] setting up certificates
	I0731 20:35:11.114358  163365 provision.go:84] configureAuth start
	I0731 20:35:11.114371  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetMachineName
	I0731 20:35:11.114653  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetIP
	I0731 20:35:11.117380  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.117720  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:11.117749  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.117848  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:11.119841  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.120170  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:11.120201  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.120310  163365 provision.go:143] copyHostCerts
	I0731 20:35:11.120364  163365 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:35:11.120375  163365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:35:11.120454  163365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:35:11.120563  163365 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:35:11.120574  163365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:35:11.120610  163365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:35:11.120697  163365 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:35:11.120707  163365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:35:11.120738  163365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:35:11.120817  163365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.test-preload-520960 san=[127.0.0.1 192.168.39.177 localhost minikube test-preload-520960]
	I0731 20:35:11.442272  163365 provision.go:177] copyRemoteCerts
	I0731 20:35:11.442333  163365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:35:11.442363  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:11.444948  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.445252  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:11.445278  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.445450  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:11.445593  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:11.445777  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:11.445914  163365 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/test-preload-520960/id_rsa Username:docker}
	I0731 20:35:11.527946  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:35:11.551518  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 20:35:11.574526  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:35:11.597441  163365 provision.go:87] duration metric: took 483.068293ms to configureAuth
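	(The scp calls above install the CA and server certificate into /etc/docker on the guest. A quick way to confirm the SANs generated earlier in configureAuth (127.0.0.1, 192.168.39.177, localhost, minikube, test-preload-520960) would be something like the following, assuming openssl is available in the guest image; this is a manual check, not part of the logged run.)
	sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName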
	I0731 20:35:11.597471  163365 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:35:11.597681  163365 config.go:182] Loaded profile config "test-preload-520960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 20:35:11.597773  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:11.600444  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.600882  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:11.600908  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.601118  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:11.601301  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:11.601478  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:11.601616  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:11.602007  163365 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:11.602199  163365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0731 20:35:11.602216  163365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:35:11.859023  163365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:35:11.859049  163365 machine.go:97] duration metric: took 1.085295888s to provisionDockerMachine
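	(The %!s(MISSING) in the sysconfig command logged above appears to be a formatting artifact of the logger rather than of the command executed on the guest; reconstructed under that assumption, the provisioning step amounts to the following shell, shown here as a sketch rather than a capture.)
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio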
	I0731 20:35:11.859062  163365 start.go:293] postStartSetup for "test-preload-520960" (driver="kvm2")
	I0731 20:35:11.859072  163365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:35:11.859090  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:35:11.859412  163365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:35:11.859440  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:11.861994  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.862304  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:11.862326  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.862489  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:11.862700  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:11.862846  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:11.862979  163365 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/test-preload-520960/id_rsa Username:docker}
	I0731 20:35:11.944496  163365 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:35:11.948896  163365 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:35:11.948918  163365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:35:11.948992  163365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:35:11.949065  163365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:35:11.949150  163365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:35:11.959382  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:35:11.983261  163365 start.go:296] duration metric: took 124.182888ms for postStartSetup
	I0731 20:35:11.983311  163365 fix.go:56] duration metric: took 20.725903128s for fixHost
	I0731 20:35:11.983335  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:11.985782  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.986149  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:11.986173  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:11.986341  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:11.986561  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:11.986727  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:11.986877  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:11.987107  163365 main.go:141] libmachine: Using SSH client type: native
	I0731 20:35:11.987282  163365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0731 20:35:11.987293  163365 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:35:12.090247  163365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722458112.048331900
	
	I0731 20:35:12.090274  163365 fix.go:216] guest clock: 1722458112.048331900
	I0731 20:35:12.090284  163365 fix.go:229] Guest: 2024-07-31 20:35:12.0483319 +0000 UTC Remote: 2024-07-31 20:35:11.98331652 +0000 UTC m=+35.377867234 (delta=65.01538ms)
	I0731 20:35:12.090319  163365 fix.go:200] guest clock delta is within tolerance: 65.01538ms
	I0731 20:35:12.090327  163365 start.go:83] releasing machines lock for "test-preload-520960", held for 20.83293185s
	I0731 20:35:12.090348  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:35:12.090607  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetIP
	I0731 20:35:12.093083  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:12.093385  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:12.093412  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:12.093602  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:35:12.094205  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:35:12.094376  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:35:12.094478  163365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:35:12.094522  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:12.094630  163365 ssh_runner.go:195] Run: cat /version.json
	I0731 20:35:12.094657  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:12.097197  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:12.097523  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:12.097550  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:12.097573  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:12.097691  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:12.097872  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:12.098142  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:12.098157  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:12.098170  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:12.098223  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:12.098343  163365 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/test-preload-520960/id_rsa Username:docker}
	I0731 20:35:12.098390  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:12.098558  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:12.098692  163365 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/test-preload-520960/id_rsa Username:docker}
	I0731 20:35:12.174585  163365 ssh_runner.go:195] Run: systemctl --version
	I0731 20:35:12.199028  163365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:35:12.341137  163365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:35:12.346947  163365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:35:12.347014  163365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:35:12.363419  163365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:35:12.363440  163365 start.go:495] detecting cgroup driver to use...
	I0731 20:35:12.363506  163365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:35:12.380422  163365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:35:12.394058  163365 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:35:12.394120  163365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:35:12.407895  163365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:35:12.421431  163365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:35:12.532444  163365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:35:12.671306  163365 docker.go:233] disabling docker service ...
	I0731 20:35:12.671374  163365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:35:12.686189  163365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:35:12.699195  163365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:35:12.849526  163365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:35:12.981123  163365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:35:12.994960  163365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:35:13.013613  163365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0731 20:35:13.013672  163365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:35:13.024564  163365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:35:13.024630  163365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:35:13.035882  163365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:35:13.046518  163365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:35:13.056909  163365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:35:13.067694  163365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:35:13.078139  163365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:35:13.094684  163365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
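	(Taken together, the sed/grep edits above are intended to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is reconstructed from the commands in the log, not captured from the guest.)
	pause_image = "registry.k8s.io/pause:3.7"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]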
	I0731 20:35:13.105522  163365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:35:13.115258  163365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:35:13.115313  163365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:35:13.129150  163365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
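	(The failed sysctl above is expected: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the run falls back to modprobe. An illustrative manual check of the same ordering, assuming shell access to the guest:)
	sudo sysctl net.bridge.bridge-nf-call-iptables   # fails while br_netfilter is not loaded
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded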
	I0731 20:35:13.138593  163365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:35:13.265390  163365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:35:13.406563  163365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:35:13.406646  163365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:35:13.411408  163365 start.go:563] Will wait 60s for crictl version
	I0731 20:35:13.411453  163365 ssh_runner.go:195] Run: which crictl
	I0731 20:35:13.415129  163365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:35:13.453867  163365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:35:13.453932  163365 ssh_runner.go:195] Run: crio --version
	I0731 20:35:13.484223  163365 ssh_runner.go:195] Run: crio --version
	I0731 20:35:13.512652  163365 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0731 20:35:13.513930  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetIP
	I0731 20:35:13.516689  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:13.517040  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:13.517063  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:13.517271  163365 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:35:13.521358  163365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:35:13.533532  163365 kubeadm.go:883] updating cluster {Name:test-preload-520960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-520960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:35:13.533639  163365 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 20:35:13.533680  163365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:35:13.569478  163365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 20:35:13.569542  163365 ssh_runner.go:195] Run: which lz4
	I0731 20:35:13.573418  163365 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:35:13.577579  163365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:35:13.577606  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0731 20:35:15.097646  163365 crio.go:462] duration metric: took 1.524254964s to copy over tarball
	I0731 20:35:15.097715  163365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:35:17.451740  163365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.353991789s)
	I0731 20:35:17.451776  163365 crio.go:469] duration metric: took 2.354100174s to extract the tarball
	I0731 20:35:17.451786  163365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:35:17.493119  163365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:35:17.537055  163365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 20:35:17.537085  163365 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:35:17.537150  163365 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:35:17.537172  163365 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 20:35:17.537207  163365 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 20:35:17.537225  163365 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 20:35:17.537259  163365 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 20:35:17.537280  163365 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 20:35:17.537294  163365 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 20:35:17.537362  163365 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 20:35:17.538870  163365 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 20:35:17.538878  163365 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 20:35:17.538870  163365 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 20:35:17.538878  163365 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 20:35:17.538880  163365 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 20:35:17.538882  163365 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:35:17.538885  163365 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 20:35:17.538904  163365 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 20:35:17.749684  163365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 20:35:17.788989  163365 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0731 20:35:17.789026  163365 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 20:35:17.789066  163365 ssh_runner.go:195] Run: which crictl
	I0731 20:35:17.792806  163365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 20:35:17.824703  163365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0731 20:35:17.824797  163365 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 20:35:17.829443  163365 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0731 20:35:17.829466  163365 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 20:35:17.829518  163365 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0731 20:35:17.873327  163365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0731 20:35:17.875615  163365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 20:35:17.898341  163365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 20:35:17.918372  163365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 20:35:17.943902  163365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0731 20:35:17.962466  163365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0731 20:35:18.389783  163365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:35:21.290378  163365 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (3.460834974s)
	I0731 20:35:21.290414  163365 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 20:35:21.290475  163365 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4: (3.417091518s)
	I0731 20:35:21.290526  163365 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0731 20:35:21.290555  163365 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 20:35:21.290574  163365 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6: (3.414932249s)
	I0731 20:35:21.290603  163365 ssh_runner.go:195] Run: which crictl
	I0731 20:35:21.290607  163365 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0731 20:35:21.290624  163365 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7: (3.392253243s)
	I0731 20:35:21.290660  163365 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0731 20:35:21.290677  163365 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4: (3.372278334s)
	I0731 20:35:21.290722  163365 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0731 20:35:21.290726  163365 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4: (3.346801266s)
	I0731 20:35:21.290744  163365 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 20:35:21.290754  163365 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0731 20:35:21.290770  163365 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 20:35:21.290779  163365 ssh_runner.go:195] Run: which crictl
	I0731 20:35:21.290798  163365 ssh_runner.go:195] Run: which crictl
	I0731 20:35:21.290809  163365 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.901001658s)
	I0731 20:35:21.290772  163365 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4: (3.32827909s)
	I0731 20:35:21.290858  163365 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0731 20:35:21.290683  163365 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0731 20:35:21.290891  163365 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 20:35:21.290936  163365 ssh_runner.go:195] Run: which crictl
	I0731 20:35:21.290632  163365 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 20:35:21.290961  163365 ssh_runner.go:195] Run: which crictl
	I0731 20:35:21.290939  163365 ssh_runner.go:195] Run: which crictl
	I0731 20:35:21.295592  163365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0731 20:35:21.304246  163365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 20:35:21.304332  163365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 20:35:21.304387  163365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 20:35:21.304406  163365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0731 20:35:21.304432  163365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0731 20:35:21.423857  163365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 20:35:21.423884  163365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 20:35:21.423966  163365 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 20:35:21.423983  163365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0731 20:35:21.423993  163365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 20:35:21.423967  163365 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 20:35:21.424044  163365 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 20:35:21.424060  163365 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 20:35:21.441854  163365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 20:35:21.441973  163365 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 20:35:21.444123  163365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 20:35:21.444182  163365 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0731 20:35:21.444200  163365 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 20:35:21.444223  163365 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 20:35:21.444239  163365 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 20:35:21.444254  163365 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0731 20:35:21.444305  163365 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0731 20:35:21.447995  163365 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0731 20:35:21.448212  163365 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0731 20:35:22.188106  163365 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0731 20:35:22.188137  163365 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0731 20:35:22.188178  163365 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 20:35:22.188235  163365 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0731 20:35:22.535282  163365 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 20:35:22.535339  163365 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 20:35:22.535408  163365 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0731 20:35:22.685404  163365 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0731 20:35:22.685455  163365 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 20:35:22.685518  163365 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 20:35:23.536382  163365 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0731 20:35:23.536435  163365 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 20:35:23.536496  163365 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 20:35:24.280228  163365 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0731 20:35:24.280296  163365 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 20:35:24.280358  163365 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 20:35:24.722754  163365 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0731 20:35:24.722811  163365 cache_images.go:123] Successfully loaded all cached images
	I0731 20:35:24.722818  163365 cache_images.go:92] duration metric: took 7.185719514s to LoadCachedImages
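
The LoadCachedImages phase above repeats one pattern per image: inspect it in the container runtime, remove the stale tag when the expected digest is missing, then load the cached archive with podman. A hedged Go sketch of that loop follows; the direct exec calls and the cache-file naming are illustrative assumptions (the real code runs these commands over SSH).

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // loadCachedImage mirrors the per-image steps in the log: check whether the
    // image is already usable, remove a stale tag if not, then load the cached
    // archive with "podman load".
    func loadCachedImage(image, cacheDir string) error {
        // e.g. registry.k8s.io/etcd:3.5.3-0 -> /var/lib/minikube/images/etcd_3.5.3-0
        archive := filepath.Join(cacheDir, strings.ReplaceAll(filepath.Base(image), ":", "_"))

        if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
            return nil // already present, nothing to do
        }
        // Remove any stale tag so the load below wins.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()

        out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v: %s", archive, err, out)
        }
        return nil
    }

    func main() {
        images := []string{"registry.k8s.io/etcd:3.5.3-0", "registry.k8s.io/pause:3.7"}
        for _, img := range images {
            if err := loadCachedImage(img, "/var/lib/minikube/images"); err != nil {
                fmt.Println(err)
            }
        }
    }
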
	I0731 20:35:24.722849  163365 kubeadm.go:934] updating node { 192.168.39.177 8443 v1.24.4 crio true true} ...
	I0731 20:35:24.722979  163365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-520960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-520960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:35:24.723067  163365 ssh_runner.go:195] Run: crio config
	I0731 20:35:24.769348  163365 cni.go:84] Creating CNI manager for ""
	I0731 20:35:24.769371  163365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:35:24.769383  163365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:35:24.769402  163365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-520960 NodeName:test-preload-520960 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:35:24.769540  163365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-520960"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:35:24.769617  163365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0731 20:35:24.780228  163365 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:35:24.780287  163365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:35:24.790083  163365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0731 20:35:24.806523  163365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:35:24.822671  163365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0731 20:35:24.839050  163365 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0731 20:35:24.842708  163365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
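
The /etc/hosts update above is the usual read-filter-append pattern made idempotent: drop any existing control-plane.minikube.internal line, then append the current one. A small Go sketch of the same pattern, assuming a writable hosts file on the local machine:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so that exactly one line maps name to
    // ip, mirroring the bash one-liner in the log.
    func ensureHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line) // keep everything except the old entry
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.177", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
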
	I0731 20:35:24.854842  163365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:35:24.966080  163365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:35:24.985987  163365 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960 for IP: 192.168.39.177
	I0731 20:35:24.986009  163365 certs.go:194] generating shared ca certs ...
	I0731 20:35:24.986028  163365 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:35:24.986196  163365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:35:24.986281  163365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:35:24.986297  163365 certs.go:256] generating profile certs ...
	I0731 20:35:24.986440  163365 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/client.key
	I0731 20:35:24.986525  163365 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/apiserver.key.db13169c
	I0731 20:35:24.986576  163365 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/proxy-client.key
	I0731 20:35:24.986752  163365 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:35:24.986812  163365 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:35:24.986825  163365 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:35:24.986880  163365 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:35:24.986911  163365 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:35:24.986931  163365 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:35:24.986973  163365 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:35:24.987720  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:35:25.039710  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:35:25.076514  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:35:25.105793  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:35:25.138170  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 20:35:25.170235  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:35:25.197759  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:35:25.220507  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:35:25.243372  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:35:25.265951  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:35:25.288331  163365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:35:25.310715  163365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:35:25.326997  163365 ssh_runner.go:195] Run: openssl version
	I0731 20:35:25.332834  163365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:35:25.344921  163365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:35:25.349492  163365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:35:25.349553  163365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:35:25.355386  163365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:35:25.366121  163365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:35:25.376512  163365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:35:25.380696  163365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:35:25.380741  163365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:35:25.386262  163365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:35:25.396875  163365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:35:25.407963  163365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:35:25.412146  163365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:35:25.412198  163365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:35:25.417741  163365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
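
The certificate install loop above follows the OpenSSL trust-store convention: copy each PEM into /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash, and point /etc/ssl/certs/<hash>.0 at it. A minimal Go sketch of that convention, shelling out to openssl; the single hard-coded path is an assumption for illustration.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert links /etc/ssl/certs/<subject-hash>.0 at certPath, which is how
    // OpenSSL-based clients discover CA certificates (the ln -fs calls in the log).
    func trustCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("openssl hash: %v", err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
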
	I0731 20:35:25.428215  163365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:35:25.432332  163365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:35:25.437922  163365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:35:25.443396  163365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:35:25.449091  163365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:35:25.454619  163365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:35:25.460160  163365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:35:25.465638  163365 kubeadm.go:392] StartCluster: {Name:test-preload-520960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-520960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:35:25.465740  163365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:35:25.465784  163365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:35:25.502345  163365 cri.go:89] found id: ""
	I0731 20:35:25.502461  163365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:35:25.512832  163365 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:35:25.512852  163365 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:35:25.512896  163365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:35:25.522786  163365 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:35:25.523199  163365 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-520960" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:35:25.523302  163365 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-520960" cluster setting kubeconfig missing "test-preload-520960" context setting]
	I0731 20:35:25.523652  163365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:35:25.524246  163365 kapi.go:59] client config for test-preload-520960: &rest.Config{Host:"https://192.168.39.177:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/client.key", CAFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 20:35:25.524986  163365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:35:25.534567  163365 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.177
	I0731 20:35:25.534599  163365 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:35:25.534613  163365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:35:25.534656  163365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:35:25.569609  163365 cri.go:89] found id: ""
	I0731 20:35:25.569681  163365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:35:25.586318  163365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:35:25.596621  163365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:35:25.596640  163365 kubeadm.go:157] found existing configuration files:
	
	I0731 20:35:25.596680  163365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:35:25.605927  163365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:35:25.605970  163365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:35:25.615319  163365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:35:25.624133  163365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:35:25.624184  163365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:35:25.633351  163365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:35:25.642061  163365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:35:25.642116  163365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:35:25.651092  163365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:35:25.659917  163365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:35:25.659966  163365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:35:25.669126  163365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:35:25.678334  163365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:35:25.770097  163365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:35:26.562124  163365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:35:26.807701  163365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:35:26.879453  163365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:35:26.999428  163365 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:35:26.999540  163365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:35:27.500085  163365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:35:27.999624  163365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:35:28.017444  163365 api_server.go:72] duration metric: took 1.018018121s to wait for apiserver process to appear ...
	I0731 20:35:28.017470  163365 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:35:28.017490  163365 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0731 20:35:28.018007  163365 api_server.go:269] stopped: https://192.168.39.177:8443/healthz: Get "https://192.168.39.177:8443/healthz": dial tcp 192.168.39.177:8443: connect: connection refused
	I0731 20:35:28.518489  163365 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0731 20:35:31.486971  163365 api_server.go:279] https://192.168.39.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:35:31.487023  163365 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:35:31.487042  163365 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0731 20:35:31.519850  163365 api_server.go:279] https://192.168.39.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:35:31.519882  163365 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:35:31.519896  163365 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0731 20:35:31.549234  163365 api_server.go:279] https://192.168.39.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:35:31.549262  163365 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:35:32.018036  163365 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0731 20:35:32.022954  163365 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:35:32.022982  163365 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:35:32.517543  163365 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0731 20:35:32.537398  163365 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:35:32.537434  163365 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:35:33.017940  163365 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0731 20:35:33.025047  163365 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I0731 20:35:33.036272  163365 api_server.go:141] control plane version: v1.24.4
	I0731 20:35:33.036309  163365 api_server.go:131] duration metric: took 5.018832007s to wait for apiserver health ...
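
The wait above polls the apiserver /healthz endpoint until the 403 and 500 responses give way to 200 ok. A minimal Go polling loop in the same spirit; skipping TLS verification here is an assumption made for brevity in place of the real client-certificate configuration.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout expires, mirroring the retry loop in the log.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Assumption: skip TLS verification instead of wiring up client certs.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.177:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
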
	I0731 20:35:33.036318  163365 cni.go:84] Creating CNI manager for ""
	I0731 20:35:33.036325  163365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:35:33.038491  163365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:35:33.039957  163365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:35:33.063808  163365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:35:33.103684  163365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:35:33.126485  163365 system_pods.go:59] 8 kube-system pods found
	I0731 20:35:33.126530  163365 system_pods.go:61] "coredns-6d4b75cb6d-tj6pg" [5efe8c81-0ebe-437a-a596-d3ccd4c4c890] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:35:33.126538  163365 system_pods.go:61] "coredns-6d4b75cb6d-wg4fb" [be15ce55-add2-4acc-873f-3c7a71eac2ba] Running
	I0731 20:35:33.126546  163365 system_pods.go:61] "etcd-test-preload-520960" [a4b247f0-bb92-48a6-9be3-530da29677e1] Running
	I0731 20:35:33.126557  163365 system_pods.go:61] "kube-apiserver-test-preload-520960" [38a0ed71-314c-4995-8c1b-b69c0f019651] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:35:33.126567  163365 system_pods.go:61] "kube-controller-manager-test-preload-520960" [2b544cae-2e15-4f40-9299-5b3e6e1c047e] Running
	I0731 20:35:33.126577  163365 system_pods.go:61] "kube-proxy-8m2vz" [c82419da-6568-450d-8548-f87cc99b66b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:35:33.126584  163365 system_pods.go:61] "kube-scheduler-test-preload-520960" [18f2b6b7-7243-4483-87e8-d370a858a876] Running
	I0731 20:35:33.126594  163365 system_pods.go:61] "storage-provisioner" [7f7257d3-5304-463e-8691-4cbb1bcfea10] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:35:33.126610  163365 system_pods.go:74] duration metric: took 22.902374ms to wait for pod list to return data ...
	I0731 20:35:33.126624  163365 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:35:33.132202  163365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:35:33.132236  163365 node_conditions.go:123] node cpu capacity is 2
	I0731 20:35:33.132250  163365 node_conditions.go:105] duration metric: took 5.618964ms to run NodePressure ...
	I0731 20:35:33.132271  163365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:35:33.352148  163365 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:35:33.357104  163365 kubeadm.go:739] kubelet initialised
	I0731 20:35:33.357132  163365 kubeadm.go:740] duration metric: took 4.954587ms waiting for restarted kubelet to initialise ...
	I0731 20:35:33.357142  163365 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:35:33.365523  163365 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-tj6pg" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:33.370320  163365 pod_ready.go:97] node "test-preload-520960" hosting pod "coredns-6d4b75cb6d-tj6pg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.370347  163365 pod_ready.go:81] duration metric: took 4.801706ms for pod "coredns-6d4b75cb6d-tj6pg" in "kube-system" namespace to be "Ready" ...
	E0731 20:35:33.370356  163365 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-520960" hosting pod "coredns-6d4b75cb6d-tj6pg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.370364  163365 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wg4fb" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:33.375451  163365 pod_ready.go:97] node "test-preload-520960" hosting pod "coredns-6d4b75cb6d-wg4fb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.375476  163365 pod_ready.go:81] duration metric: took 5.100508ms for pod "coredns-6d4b75cb6d-wg4fb" in "kube-system" namespace to be "Ready" ...
	E0731 20:35:33.375485  163365 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-520960" hosting pod "coredns-6d4b75cb6d-wg4fb" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.375491  163365 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:33.381065  163365 pod_ready.go:97] node "test-preload-520960" hosting pod "etcd-test-preload-520960" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.381084  163365 pod_ready.go:81] duration metric: took 5.584824ms for pod "etcd-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	E0731 20:35:33.381091  163365 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-520960" hosting pod "etcd-test-preload-520960" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.381097  163365 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:33.508087  163365 pod_ready.go:97] node "test-preload-520960" hosting pod "kube-apiserver-test-preload-520960" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.508118  163365 pod_ready.go:81] duration metric: took 127.01225ms for pod "kube-apiserver-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	E0731 20:35:33.508132  163365 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-520960" hosting pod "kube-apiserver-test-preload-520960" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.508141  163365 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:33.907245  163365 pod_ready.go:97] node "test-preload-520960" hosting pod "kube-controller-manager-test-preload-520960" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.907280  163365 pod_ready.go:81] duration metric: took 399.126793ms for pod "kube-controller-manager-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	E0731 20:35:33.907290  163365 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-520960" hosting pod "kube-controller-manager-test-preload-520960" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:33.907296  163365 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8m2vz" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:34.308848  163365 pod_ready.go:97] node "test-preload-520960" hosting pod "kube-proxy-8m2vz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:34.308881  163365 pod_ready.go:81] duration metric: took 401.574956ms for pod "kube-proxy-8m2vz" in "kube-system" namespace to be "Ready" ...
	E0731 20:35:34.308893  163365 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-520960" hosting pod "kube-proxy-8m2vz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:34.308902  163365 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:34.707367  163365 pod_ready.go:97] node "test-preload-520960" hosting pod "kube-scheduler-test-preload-520960" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:34.707393  163365 pod_ready.go:81] duration metric: took 398.484222ms for pod "kube-scheduler-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	E0731 20:35:34.707403  163365 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-520960" hosting pod "kube-scheduler-test-preload-520960" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:34.707416  163365 pod_ready.go:38] duration metric: took 1.350262815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:35:34.707437  163365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:35:34.722079  163365 ops.go:34] apiserver oom_adj: -16
	I0731 20:35:34.722101  163365 kubeadm.go:597] duration metric: took 9.209242097s to restartPrimaryControlPlane
	I0731 20:35:34.722123  163365 kubeadm.go:394] duration metric: took 9.256479092s to StartCluster
	I0731 20:35:34.722145  163365 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:35:34.722227  163365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:35:34.722912  163365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:35:34.723178  163365 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:35:34.723289  163365 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:35:34.723353  163365 addons.go:69] Setting storage-provisioner=true in profile "test-preload-520960"
	I0731 20:35:34.723378  163365 addons.go:69] Setting default-storageclass=true in profile "test-preload-520960"
	I0731 20:35:34.723418  163365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-520960"
	I0731 20:35:34.723383  163365 addons.go:234] Setting addon storage-provisioner=true in "test-preload-520960"
	W0731 20:35:34.723520  163365 addons.go:243] addon storage-provisioner should already be in state true
	I0731 20:35:34.723381  163365 config.go:182] Loaded profile config "test-preload-520960": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 20:35:34.723572  163365 host.go:66] Checking if "test-preload-520960" exists ...
	I0731 20:35:34.723768  163365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:34.723811  163365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:34.723951  163365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:34.723989  163365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:34.724904  163365 out.go:177] * Verifying Kubernetes components...
	I0731 20:35:34.726233  163365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:35:34.739659  163365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0731 20:35:34.740157  163365 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:34.740330  163365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0731 20:35:34.740675  163365 main.go:141] libmachine: Using API Version  1
	I0731 20:35:34.740698  163365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:34.740800  163365 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:34.741073  163365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:34.741298  163365 main.go:141] libmachine: Using API Version  1
	I0731 20:35:34.741316  163365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:34.741355  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetState
	I0731 20:35:34.741675  163365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:34.742157  163365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:34.742197  163365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:34.743812  163365 kapi.go:59] client config for test-preload-520960: &rest.Config{Host:"https://192.168.39.177:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/profiles/test-preload-520960/client.key", CAFile:"/home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 20:35:34.744061  163365 addons.go:234] Setting addon default-storageclass=true in "test-preload-520960"
	W0731 20:35:34.744076  163365 addons.go:243] addon default-storageclass should already be in state true
	I0731 20:35:34.744101  163365 host.go:66] Checking if "test-preload-520960" exists ...
	I0731 20:35:34.744326  163365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:34.744356  163365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:34.757899  163365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39137
	I0731 20:35:34.758487  163365 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:34.758813  163365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0731 20:35:34.759040  163365 main.go:141] libmachine: Using API Version  1
	I0731 20:35:34.759063  163365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:34.759215  163365 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:34.759607  163365 main.go:141] libmachine: Using API Version  1
	I0731 20:35:34.759619  163365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:34.759652  163365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:34.759841  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetState
	I0731 20:35:34.759956  163365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:34.760541  163365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:35:34.760586  163365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:35:34.761580  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:35:34.763480  163365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:35:34.764951  163365 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:35:34.764972  163365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:35:34.764990  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:34.767608  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:34.768081  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:34.768111  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:34.768277  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:34.768446  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:34.768589  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:34.768746  163365 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/test-preload-520960/id_rsa Username:docker}
	I0731 20:35:34.778610  163365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0731 20:35:34.779056  163365 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:35:34.779524  163365 main.go:141] libmachine: Using API Version  1
	I0731 20:35:34.779551  163365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:35:34.779919  163365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:35:34.780098  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetState
	I0731 20:35:34.781633  163365 main.go:141] libmachine: (test-preload-520960) Calling .DriverName
	I0731 20:35:34.781861  163365 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:35:34.781888  163365 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:35:34.781904  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHHostname
	I0731 20:35:34.784381  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:34.784776  163365 main.go:141] libmachine: (test-preload-520960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:fe:48", ip: ""} in network mk-test-preload-520960: {Iface:virbr1 ExpiryTime:2024-07-31 21:35:02 +0000 UTC Type:0 Mac:52:54:00:d7:fe:48 Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:test-preload-520960 Clientid:01:52:54:00:d7:fe:48}
	I0731 20:35:34.784797  163365 main.go:141] libmachine: (test-preload-520960) DBG | domain test-preload-520960 has defined IP address 192.168.39.177 and MAC address 52:54:00:d7:fe:48 in network mk-test-preload-520960
	I0731 20:35:34.784970  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHPort
	I0731 20:35:34.785138  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHKeyPath
	I0731 20:35:34.785299  163365 main.go:141] libmachine: (test-preload-520960) Calling .GetSSHUsername
	I0731 20:35:34.785501  163365 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/test-preload-520960/id_rsa Username:docker}
	I0731 20:35:34.903806  163365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:35:34.919840  163365 node_ready.go:35] waiting up to 6m0s for node "test-preload-520960" to be "Ready" ...
	I0731 20:35:34.984185  163365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:35:35.090419  163365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:35:35.937283  163365 main.go:141] libmachine: Making call to close driver server
	I0731 20:35:35.937328  163365 main.go:141] libmachine: (test-preload-520960) Calling .Close
	I0731 20:35:35.937680  163365 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:35:35.937702  163365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:35:35.937716  163365 main.go:141] libmachine: Making call to close driver server
	I0731 20:35:35.937719  163365 main.go:141] libmachine: (test-preload-520960) DBG | Closing plugin on server side
	I0731 20:35:35.937725  163365 main.go:141] libmachine: (test-preload-520960) Calling .Close
	I0731 20:35:35.938017  163365 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:35:35.938023  163365 main.go:141] libmachine: (test-preload-520960) DBG | Closing plugin on server side
	I0731 20:35:35.938032  163365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:35:35.951952  163365 main.go:141] libmachine: Making call to close driver server
	I0731 20:35:35.951977  163365 main.go:141] libmachine: (test-preload-520960) Calling .Close
	I0731 20:35:35.952283  163365 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:35:35.952301  163365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:35:35.970227  163365 main.go:141] libmachine: Making call to close driver server
	I0731 20:35:35.970255  163365 main.go:141] libmachine: (test-preload-520960) Calling .Close
	I0731 20:35:35.970566  163365 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:35:35.970587  163365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:35:35.970596  163365 main.go:141] libmachine: Making call to close driver server
	I0731 20:35:35.970602  163365 main.go:141] libmachine: (test-preload-520960) Calling .Close
	I0731 20:35:35.970599  163365 main.go:141] libmachine: (test-preload-520960) DBG | Closing plugin on server side
	I0731 20:35:35.970829  163365 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:35:35.970868  163365 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:35:35.972815  163365 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0731 20:35:35.974018  163365 addons.go:510] duration metric: took 1.250746749s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0731 20:35:36.923265  163365 node_ready.go:53] node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:38.924450  163365 node_ready.go:53] node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:40.926249  163365 node_ready.go:53] node "test-preload-520960" has status "Ready":"False"
	I0731 20:35:41.924470  163365 node_ready.go:49] node "test-preload-520960" has status "Ready":"True"
	I0731 20:35:41.924495  163365 node_ready.go:38] duration metric: took 7.004622799s for node "test-preload-520960" to be "Ready" ...
	I0731 20:35:41.924504  163365 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:35:41.930477  163365 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-tj6pg" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:41.936057  163365 pod_ready.go:92] pod "coredns-6d4b75cb6d-tj6pg" in "kube-system" namespace has status "Ready":"True"
	I0731 20:35:41.936082  163365 pod_ready.go:81] duration metric: took 5.584387ms for pod "coredns-6d4b75cb6d-tj6pg" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:41.936093  163365 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:41.944141  163365 pod_ready.go:92] pod "etcd-test-preload-520960" in "kube-system" namespace has status "Ready":"True"
	I0731 20:35:41.944160  163365 pod_ready.go:81] duration metric: took 8.060183ms for pod "etcd-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:41.944167  163365 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:43.950220  163365 pod_ready.go:102] pod "kube-apiserver-test-preload-520960" in "kube-system" namespace has status "Ready":"False"
	I0731 20:35:45.952634  163365 pod_ready.go:102] pod "kube-apiserver-test-preload-520960" in "kube-system" namespace has status "Ready":"False"
	I0731 20:35:46.950881  163365 pod_ready.go:92] pod "kube-apiserver-test-preload-520960" in "kube-system" namespace has status "Ready":"True"
	I0731 20:35:46.950904  163365 pod_ready.go:81] duration metric: took 5.006731278s for pod "kube-apiserver-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:46.950913  163365 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:46.955411  163365 pod_ready.go:92] pod "kube-controller-manager-test-preload-520960" in "kube-system" namespace has status "Ready":"True"
	I0731 20:35:46.955436  163365 pod_ready.go:81] duration metric: took 4.512935ms for pod "kube-controller-manager-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:46.955445  163365 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8m2vz" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:46.959933  163365 pod_ready.go:92] pod "kube-proxy-8m2vz" in "kube-system" namespace has status "Ready":"True"
	I0731 20:35:46.959959  163365 pod_ready.go:81] duration metric: took 4.5028ms for pod "kube-proxy-8m2vz" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:46.959967  163365 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:46.963980  163365 pod_ready.go:92] pod "kube-scheduler-test-preload-520960" in "kube-system" namespace has status "Ready":"True"
	I0731 20:35:46.963999  163365 pod_ready.go:81] duration metric: took 4.026329ms for pod "kube-scheduler-test-preload-520960" in "kube-system" namespace to be "Ready" ...
	I0731 20:35:46.964008  163365 pod_ready.go:38] duration metric: took 5.039495366s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:35:46.964029  163365 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:35:46.964085  163365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:35:46.985601  163365 api_server.go:72] duration metric: took 12.262381353s to wait for apiserver process to appear ...
	I0731 20:35:46.985634  163365 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:35:46.985656  163365 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0731 20:35:46.990781  163365 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I0731 20:35:46.991864  163365 api_server.go:141] control plane version: v1.24.4
	I0731 20:35:46.991892  163365 api_server.go:131] duration metric: took 6.250151ms to wait for apiserver health ...
	I0731 20:35:46.991903  163365 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:35:46.996725  163365 system_pods.go:59] 7 kube-system pods found
	I0731 20:35:46.996751  163365 system_pods.go:61] "coredns-6d4b75cb6d-tj6pg" [5efe8c81-0ebe-437a-a596-d3ccd4c4c890] Running
	I0731 20:35:46.996755  163365 system_pods.go:61] "etcd-test-preload-520960" [a4b247f0-bb92-48a6-9be3-530da29677e1] Running
	I0731 20:35:46.996759  163365 system_pods.go:61] "kube-apiserver-test-preload-520960" [38a0ed71-314c-4995-8c1b-b69c0f019651] Running
	I0731 20:35:46.996763  163365 system_pods.go:61] "kube-controller-manager-test-preload-520960" [2b544cae-2e15-4f40-9299-5b3e6e1c047e] Running
	I0731 20:35:46.996765  163365 system_pods.go:61] "kube-proxy-8m2vz" [c82419da-6568-450d-8548-f87cc99b66b8] Running
	I0731 20:35:46.996768  163365 system_pods.go:61] "kube-scheduler-test-preload-520960" [18f2b6b7-7243-4483-87e8-d370a858a876] Running
	I0731 20:35:46.996771  163365 system_pods.go:61] "storage-provisioner" [7f7257d3-5304-463e-8691-4cbb1bcfea10] Running
	I0731 20:35:46.996775  163365 system_pods.go:74] duration metric: took 4.867157ms to wait for pod list to return data ...
	I0731 20:35:46.996783  163365 default_sa.go:34] waiting for default service account to be created ...
	I0731 20:35:47.124604  163365 default_sa.go:45] found service account: "default"
	I0731 20:35:47.124630  163365 default_sa.go:55] duration metric: took 127.841054ms for default service account to be created ...
	I0731 20:35:47.124639  163365 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 20:35:47.326303  163365 system_pods.go:86] 7 kube-system pods found
	I0731 20:35:47.326333  163365 system_pods.go:89] "coredns-6d4b75cb6d-tj6pg" [5efe8c81-0ebe-437a-a596-d3ccd4c4c890] Running
	I0731 20:35:47.326339  163365 system_pods.go:89] "etcd-test-preload-520960" [a4b247f0-bb92-48a6-9be3-530da29677e1] Running
	I0731 20:35:47.326343  163365 system_pods.go:89] "kube-apiserver-test-preload-520960" [38a0ed71-314c-4995-8c1b-b69c0f019651] Running
	I0731 20:35:47.326347  163365 system_pods.go:89] "kube-controller-manager-test-preload-520960" [2b544cae-2e15-4f40-9299-5b3e6e1c047e] Running
	I0731 20:35:47.326350  163365 system_pods.go:89] "kube-proxy-8m2vz" [c82419da-6568-450d-8548-f87cc99b66b8] Running
	I0731 20:35:47.326354  163365 system_pods.go:89] "kube-scheduler-test-preload-520960" [18f2b6b7-7243-4483-87e8-d370a858a876] Running
	I0731 20:35:47.326358  163365 system_pods.go:89] "storage-provisioner" [7f7257d3-5304-463e-8691-4cbb1bcfea10] Running
	I0731 20:35:47.326366  163365 system_pods.go:126] duration metric: took 201.719754ms to wait for k8s-apps to be running ...
	I0731 20:35:47.326375  163365 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 20:35:47.326434  163365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:35:47.341802  163365 system_svc.go:56] duration metric: took 15.412571ms WaitForService to wait for kubelet
	I0731 20:35:47.341838  163365 kubeadm.go:582] duration metric: took 12.618623146s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:35:47.341864  163365 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:35:47.524674  163365 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:35:47.524705  163365 node_conditions.go:123] node cpu capacity is 2
	I0731 20:35:47.524720  163365 node_conditions.go:105] duration metric: took 182.850361ms to run NodePressure ...
	I0731 20:35:47.524739  163365 start.go:241] waiting for startup goroutines ...
	I0731 20:35:47.524749  163365 start.go:246] waiting for cluster config update ...
	I0731 20:35:47.524765  163365 start.go:255] writing updated cluster config ...
	I0731 20:35:47.525103  163365 ssh_runner.go:195] Run: rm -f paused
	I0731 20:35:47.572510  163365 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0731 20:35:47.574450  163365 out.go:177] 
	W0731 20:35:47.575672  163365 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0731 20:35:47.576988  163365 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0731 20:35:47.578308  163365 out.go:177] * Done! kubectl is now configured to use "test-preload-520960" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.419949046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458148419924043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d99806ba-edcd-47f9-85c0-80fb27ff4842 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.420572034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb238e52-4b96-4f24-808a-c1445d65f461 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.420634371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb238e52-4b96-4f24-808a-c1445d65f461 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.420812888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4896897b8237f514af467a61dac21c033018858bddbda2a1702f38a8b2688f51,PodSandboxId:7dd6acd78b6dc13fc9d5c709a1c06d11d0be99a87390cd055bfaedb1789d6c22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722458140026252477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tj6pg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5efe8c81-0ebe-437a-a596-d3ccd4c4c890,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4b5fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b2de0c68e41436c1192faa2ada5bf89928f1abdaf97a2674d6ec983de66e0,PodSandboxId:cccf27f2ed62d6417f8320f2e9170bf87721c823bb637377570ff715b5eb128c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458134047604134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7f7257d3-5304-463e-8691-4cbb1bcfea10,},Annotations:map[string]string{io.kubernetes.container.hash: 1295f20b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d09825bc6852a0bbac82c2db4464a18b89d04447cce541f3c16fead96aa88fc,PodSandboxId:cccf27f2ed62d6417f8320f2e9170bf87721c823bb637377570ff715b5eb128c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458132928022808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 7f7257d3-5304-463e-8691-4cbb1bcfea10,},Annotations:map[string]string{io.kubernetes.container.hash: 1295f20b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e6b167670a9e960b6df55ee769885cd64f0a411e0ae7ba0c5f0a067f1f4a25,PodSandboxId:88dd88eaee00c80257c75539572cc9172d12c6b8b8df76be08f7d2c41cb41eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722458132608468995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m2vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82419da-6568-4
50d-8548-f87cc99b66b8,},Annotations:map[string]string{io.kubernetes.container.hash: 521bf7a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed2313b0fd24ec6eabd3f10acc7b8186101b8fa69a06b0e1f75a55f46151c48,PodSandboxId:74b127ef5699e44201464ca1d028ab2cbfa02eafa5ae3391a1ab05c78b3995fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722458127720818704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b934e2d786526f512cccd4
c348ad08ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce5a32e3f1c145cb674b3d5d3d1eca1e9f98c54b68812a60ac2105392020d24,PodSandboxId:c7fa406ad2707a2591dd91c88b2eab0b6879a36af7b6700872e56d65b9cce981,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722458127652847979,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6cfc9697a44f739e6c93e22f6a2ba54,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 79f7a635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7c19785e9f7eb708234c1bf16614780cb6e93ac1aea8528c97d5393763484e,PodSandboxId:99a485c72cfbbcd5feae60f0f6371a845367779c9969588afb8bcdca17b6fc10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722458127673458949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1d996800afd466fddc9ae443870349,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f61c05e770a304d8bbd92701ad64897d67ec0f815b135fb94ee4b43355aadd95,PodSandboxId:de5c7dbe8d39bc71a91bb7b7f0f7fa1a13352548ac06cb3f6f783a667d0b6734,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722458127616202005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c54c2475177eb40b1036d5cf811f30ab,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e56e4f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb238e52-4b96-4f24-808a-c1445d65f461 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.461592225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f579c6a5-4575-44d2-83d8-1084722bf824 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.461675808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f579c6a5-4575-44d2-83d8-1084722bf824 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.462726652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=965432d0-5afc-417f-9fba-d7157586ae5e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.463150566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458148463126504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=965432d0-5afc-417f-9fba-d7157586ae5e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.463729960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13fc2794-02d3-4a76-8485-be9ef2fddd10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.463781234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13fc2794-02d3-4a76-8485-be9ef2fddd10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.463940750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4896897b8237f514af467a61dac21c033018858bddbda2a1702f38a8b2688f51,PodSandboxId:7dd6acd78b6dc13fc9d5c709a1c06d11d0be99a87390cd055bfaedb1789d6c22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722458140026252477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tj6pg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5efe8c81-0ebe-437a-a596-d3ccd4c4c890,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4b5fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b2de0c68e41436c1192faa2ada5bf89928f1abdaf97a2674d6ec983de66e0,PodSandboxId:cccf27f2ed62d6417f8320f2e9170bf87721c823bb637377570ff715b5eb128c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458134047604134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7f7257d3-5304-463e-8691-4cbb1bcfea10,},Annotations:map[string]string{io.kubernetes.container.hash: 1295f20b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d09825bc6852a0bbac82c2db4464a18b89d04447cce541f3c16fead96aa88fc,PodSandboxId:cccf27f2ed62d6417f8320f2e9170bf87721c823bb637377570ff715b5eb128c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458132928022808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 7f7257d3-5304-463e-8691-4cbb1bcfea10,},Annotations:map[string]string{io.kubernetes.container.hash: 1295f20b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e6b167670a9e960b6df55ee769885cd64f0a411e0ae7ba0c5f0a067f1f4a25,PodSandboxId:88dd88eaee00c80257c75539572cc9172d12c6b8b8df76be08f7d2c41cb41eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722458132608468995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m2vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82419da-6568-4
50d-8548-f87cc99b66b8,},Annotations:map[string]string{io.kubernetes.container.hash: 521bf7a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed2313b0fd24ec6eabd3f10acc7b8186101b8fa69a06b0e1f75a55f46151c48,PodSandboxId:74b127ef5699e44201464ca1d028ab2cbfa02eafa5ae3391a1ab05c78b3995fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722458127720818704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b934e2d786526f512cccd4
c348ad08ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce5a32e3f1c145cb674b3d5d3d1eca1e9f98c54b68812a60ac2105392020d24,PodSandboxId:c7fa406ad2707a2591dd91c88b2eab0b6879a36af7b6700872e56d65b9cce981,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722458127652847979,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6cfc9697a44f739e6c93e22f6a2ba54,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 79f7a635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7c19785e9f7eb708234c1bf16614780cb6e93ac1aea8528c97d5393763484e,PodSandboxId:99a485c72cfbbcd5feae60f0f6371a845367779c9969588afb8bcdca17b6fc10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722458127673458949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1d996800afd466fddc9ae443870349,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f61c05e770a304d8bbd92701ad64897d67ec0f815b135fb94ee4b43355aadd95,PodSandboxId:de5c7dbe8d39bc71a91bb7b7f0f7fa1a13352548ac06cb3f6f783a667d0b6734,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722458127616202005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c54c2475177eb40b1036d5cf811f30ab,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e56e4f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13fc2794-02d3-4a76-8485-be9ef2fddd10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.500184647Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=974ad36d-5c5f-4faa-bd5a-6100c1e297e9 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.500267134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=974ad36d-5c5f-4faa-bd5a-6100c1e297e9 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.501258737Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f707a4b-3cac-4311-9229-1c115b6d5f6b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.501764927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458148501743674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f707a4b-3cac-4311-9229-1c115b6d5f6b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.502183136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04ecf072-34b6-4c46-ae73-1fc8174745b3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.502235626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04ecf072-34b6-4c46-ae73-1fc8174745b3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.502419382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4896897b8237f514af467a61dac21c033018858bddbda2a1702f38a8b2688f51,PodSandboxId:7dd6acd78b6dc13fc9d5c709a1c06d11d0be99a87390cd055bfaedb1789d6c22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722458140026252477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tj6pg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5efe8c81-0ebe-437a-a596-d3ccd4c4c890,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4b5fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b2de0c68e41436c1192faa2ada5bf89928f1abdaf97a2674d6ec983de66e0,PodSandboxId:cccf27f2ed62d6417f8320f2e9170bf87721c823bb637377570ff715b5eb128c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458134047604134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7f7257d3-5304-463e-8691-4cbb1bcfea10,},Annotations:map[string]string{io.kubernetes.container.hash: 1295f20b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d09825bc6852a0bbac82c2db4464a18b89d04447cce541f3c16fead96aa88fc,PodSandboxId:cccf27f2ed62d6417f8320f2e9170bf87721c823bb637377570ff715b5eb128c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458132928022808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 7f7257d3-5304-463e-8691-4cbb1bcfea10,},Annotations:map[string]string{io.kubernetes.container.hash: 1295f20b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e6b167670a9e960b6df55ee769885cd64f0a411e0ae7ba0c5f0a067f1f4a25,PodSandboxId:88dd88eaee00c80257c75539572cc9172d12c6b8b8df76be08f7d2c41cb41eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722458132608468995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m2vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82419da-6568-4
50d-8548-f87cc99b66b8,},Annotations:map[string]string{io.kubernetes.container.hash: 521bf7a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed2313b0fd24ec6eabd3f10acc7b8186101b8fa69a06b0e1f75a55f46151c48,PodSandboxId:74b127ef5699e44201464ca1d028ab2cbfa02eafa5ae3391a1ab05c78b3995fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722458127720818704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b934e2d786526f512cccd4
c348ad08ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce5a32e3f1c145cb674b3d5d3d1eca1e9f98c54b68812a60ac2105392020d24,PodSandboxId:c7fa406ad2707a2591dd91c88b2eab0b6879a36af7b6700872e56d65b9cce981,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722458127652847979,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6cfc9697a44f739e6c93e22f6a2ba54,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 79f7a635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7c19785e9f7eb708234c1bf16614780cb6e93ac1aea8528c97d5393763484e,PodSandboxId:99a485c72cfbbcd5feae60f0f6371a845367779c9969588afb8bcdca17b6fc10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722458127673458949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1d996800afd466fddc9ae443870349,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f61c05e770a304d8bbd92701ad64897d67ec0f815b135fb94ee4b43355aadd95,PodSandboxId:de5c7dbe8d39bc71a91bb7b7f0f7fa1a13352548ac06cb3f6f783a667d0b6734,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722458127616202005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c54c2475177eb40b1036d5cf811f30ab,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e56e4f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04ecf072-34b6-4c46-ae73-1fc8174745b3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.536998964Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=582d2153-83e4-4525-9dc2-c55621063709 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.537082819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=582d2153-83e4-4525-9dc2-c55621063709 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.537996844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c4b5fa7-9a2f-414e-8aef-60b1d21c87d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.538400567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458148538381994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c4b5fa7-9a2f-414e-8aef-60b1d21c87d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.539167270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17f18188-05d7-424c-b4be-077f0768cda0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.539231442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17f18188-05d7-424c-b4be-077f0768cda0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:35:48 test-preload-520960 crio[689]: time="2024-07-31 20:35:48.539402291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4896897b8237f514af467a61dac21c033018858bddbda2a1702f38a8b2688f51,PodSandboxId:7dd6acd78b6dc13fc9d5c709a1c06d11d0be99a87390cd055bfaedb1789d6c22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722458140026252477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tj6pg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5efe8c81-0ebe-437a-a596-d3ccd4c4c890,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4b5fc5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:008b2de0c68e41436c1192faa2ada5bf89928f1abdaf97a2674d6ec983de66e0,PodSandboxId:cccf27f2ed62d6417f8320f2e9170bf87721c823bb637377570ff715b5eb128c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458134047604134,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7f7257d3-5304-463e-8691-4cbb1bcfea10,},Annotations:map[string]string{io.kubernetes.container.hash: 1295f20b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d09825bc6852a0bbac82c2db4464a18b89d04447cce541f3c16fead96aa88fc,PodSandboxId:cccf27f2ed62d6417f8320f2e9170bf87721c823bb637377570ff715b5eb128c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458132928022808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 7f7257d3-5304-463e-8691-4cbb1bcfea10,},Annotations:map[string]string{io.kubernetes.container.hash: 1295f20b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e6b167670a9e960b6df55ee769885cd64f0a411e0ae7ba0c5f0a067f1f4a25,PodSandboxId:88dd88eaee00c80257c75539572cc9172d12c6b8b8df76be08f7d2c41cb41eb7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722458132608468995,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m2vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c82419da-6568-4
50d-8548-f87cc99b66b8,},Annotations:map[string]string{io.kubernetes.container.hash: 521bf7a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed2313b0fd24ec6eabd3f10acc7b8186101b8fa69a06b0e1f75a55f46151c48,PodSandboxId:74b127ef5699e44201464ca1d028ab2cbfa02eafa5ae3391a1ab05c78b3995fb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722458127720818704,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b934e2d786526f512cccd4
c348ad08ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ce5a32e3f1c145cb674b3d5d3d1eca1e9f98c54b68812a60ac2105392020d24,PodSandboxId:c7fa406ad2707a2591dd91c88b2eab0b6879a36af7b6700872e56d65b9cce981,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722458127652847979,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6cfc9697a44f739e6c93e22f6a2ba54,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 79f7a635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa7c19785e9f7eb708234c1bf16614780cb6e93ac1aea8528c97d5393763484e,PodSandboxId:99a485c72cfbbcd5feae60f0f6371a845367779c9969588afb8bcdca17b6fc10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722458127673458949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc1d996800afd466fddc9ae443870349,},Annotations:
map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f61c05e770a304d8bbd92701ad64897d67ec0f815b135fb94ee4b43355aadd95,PodSandboxId:de5c7dbe8d39bc71a91bb7b7f0f7fa1a13352548ac06cb3f6f783a667d0b6734,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722458127616202005,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-520960,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c54c2475177eb40b1036d5cf811f30ab,},Annotations:map[string]
string{io.kubernetes.container.hash: 3e56e4f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17f18188-05d7-424c-b4be-077f0768cda0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4896897b8237f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   7dd6acd78b6dc       coredns-6d4b75cb6d-tj6pg
	008b2de0c68e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   cccf27f2ed62d       storage-provisioner
	8d09825bc6852       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       1                   cccf27f2ed62d       storage-provisioner
	e3e6b167670a9       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   88dd88eaee00c       kube-proxy-8m2vz
	1ed2313b0fd24       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   74b127ef5699e       kube-scheduler-test-preload-520960
	fa7c19785e9f7       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   99a485c72cfbb       kube-controller-manager-test-preload-520960
	3ce5a32e3f1c1       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   c7fa406ad2707       etcd-test-preload-520960
	f61c05e770a30       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   de5c7dbe8d39b       kube-apiserver-test-preload-520960
	
	
	==> coredns [4896897b8237f514af467a61dac21c033018858bddbda2a1702f38a8b2688f51] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:43796 - 60371 "HINFO IN 196005882331926357.6449386572991903520. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015143599s
	
	
	==> describe nodes <==
	Name:               test-preload-520960
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-520960
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=test-preload-520960
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_34_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:34:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-520960
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:35:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:35:41 +0000   Wed, 31 Jul 2024 20:34:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:35:41 +0000   Wed, 31 Jul 2024 20:34:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:35:41 +0000   Wed, 31 Jul 2024 20:34:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:35:41 +0000   Wed, 31 Jul 2024 20:35:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    test-preload-520960
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1012639236844144af02fa127c8952d0
	  System UUID:                10126392-3684-4144-af02-fa127c8952d0
	  Boot ID:                    b9fbc92c-7faa-4643-ac87-978b55df70b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-tj6pg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-test-preload-520960                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         100s
	  kube-system                 kube-apiserver-test-preload-520960             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-test-preload-520960    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-8m2vz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-test-preload-520960             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 100s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s               kubelet          Node test-preload-520960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s               kubelet          Node test-preload-520960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s               kubelet          Node test-preload-520960 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                90s                kubelet          Node test-preload-520960 status is now: NodeReady
	  Normal  RegisteredNode           88s                node-controller  Node test-preload-520960 event: Registered Node test-preload-520960 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)  kubelet          Node test-preload-520960 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)  kubelet          Node test-preload-520960 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)  kubelet          Node test-preload-520960 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-520960 event: Registered Node test-preload-520960 in Controller
	
	
	==> dmesg <==
	[Jul31 20:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050865] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040434] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752948] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jul31 20:35] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.572901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.069737] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.057168] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052525] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.179583] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.155001] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.285325] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[ +11.696650] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.063715] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.776246] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +5.861012] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.214063] systemd-fstab-generator[1766]: Ignoring "noauto" option for root device
	[  +5.034708] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [3ce5a32e3f1c145cb674b3d5d3d1eca1e9f98c54b68812a60ac2105392020d24] <==
	{"level":"info","ts":"2024-07-31T20:35:28.079Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b3a0188682bd7022","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T20:35:28.096Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T20:35:28.101Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b3a0188682bd7022","initial-advertise-peer-urls":["https://192.168.39.177:2380"],"listen-peer-urls":["https://192.168.39.177:2380"],"advertise-client-urls":["https://192.168.39.177:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.177:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T20:35:28.101Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T20:35:28.101Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T20:35:28.101Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.177:2380"}
	{"level":"info","ts":"2024-07-31T20:35:28.101Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.177:2380"}
	{"level":"info","ts":"2024-07-31T20:35:28.104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 switched to configuration voters=(12943372295060942882)"}
	{"level":"info","ts":"2024-07-31T20:35:28.104Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6df60d153d3d688","local-member-id":"b3a0188682bd7022","added-peer-id":"b3a0188682bd7022","added-peer-peer-urls":["https://192.168.39.177:2380"]}
	{"level":"info","ts":"2024-07-31T20:35:28.104Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6df60d153d3d688","local-member-id":"b3a0188682bd7022","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:35:28.104Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:35:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T20:35:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T20:35:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 received MsgPreVoteResp from b3a0188682bd7022 at term 2"}
	{"level":"info","ts":"2024-07-31T20:35:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T20:35:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 received MsgVoteResp from b3a0188682bd7022 at term 3"}
	{"level":"info","ts":"2024-07-31T20:35:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b3a0188682bd7022 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T20:35:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b3a0188682bd7022 elected leader b3a0188682bd7022 at term 3"}
	{"level":"info","ts":"2024-07-31T20:35:29.032Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b3a0188682bd7022","local-member-attributes":"{Name:test-preload-520960 ClientURLs:[https://192.168.39.177:2379]}","request-path":"/0/members/b3a0188682bd7022/attributes","cluster-id":"e6df60d153d3d688","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T20:35:29.033Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:35:29.034Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.177:2379"}
	{"level":"info","ts":"2024-07-31T20:35:29.034Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:35:29.035Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:35:29.035Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:35:29.035Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:35:48 up 0 min,  0 users,  load average: 0.64, 0.17, 0.06
	Linux test-preload-520960 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f61c05e770a304d8bbd92701ad64897d67ec0f815b135fb94ee4b43355aadd95] <==
	I0731 20:35:31.441252       1 controller.go:85] Starting OpenAPI V3 controller
	I0731 20:35:31.441284       1 naming_controller.go:291] Starting NamingConditionController
	I0731 20:35:31.442564       1 establishing_controller.go:76] Starting EstablishingController
	I0731 20:35:31.442622       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0731 20:35:31.442644       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0731 20:35:31.442661       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0731 20:35:31.509176       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0731 20:35:31.513421       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0731 20:35:31.543446       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0731 20:35:31.587340       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 20:35:31.597824       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 20:35:31.599631       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 20:35:31.600049       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 20:35:31.600685       1 cache.go:39] Caches are synced for autoregister controller
	I0731 20:35:31.607790       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 20:35:32.094004       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 20:35:32.406147       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 20:35:32.974501       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0731 20:35:33.214777       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 20:35:33.224497       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 20:35:33.258685       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 20:35:33.282855       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 20:35:33.300649       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 20:35:44.615330       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 20:35:44.621508       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fa7c19785e9f7eb708234c1bf16614780cb6e93ac1aea8528c97d5393763484e] <==
	I0731 20:35:44.584777       1 shared_informer.go:262] Caches are synced for crt configmap
	I0731 20:35:44.587993       1 shared_informer.go:262] Caches are synced for taint
	I0731 20:35:44.588246       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0731 20:35:44.588990       1 event.go:294] "Event occurred" object="test-preload-520960" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-520960 event: Registered Node test-preload-520960 in Controller"
	I0731 20:35:44.589628       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0731 20:35:44.589811       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-520960. Assuming now as a timestamp.
	I0731 20:35:44.589866       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0731 20:35:44.592579       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0731 20:35:44.599400       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0731 20:35:44.605199       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0731 20:35:44.608970       1 shared_informer.go:262] Caches are synced for HPA
	I0731 20:35:44.610043       1 shared_informer.go:262] Caches are synced for endpoint
	I0731 20:35:44.621824       1 shared_informer.go:262] Caches are synced for namespace
	I0731 20:35:44.681735       1 shared_informer.go:262] Caches are synced for service account
	I0731 20:35:44.689589       1 shared_informer.go:262] Caches are synced for deployment
	I0731 20:35:44.695691       1 shared_informer.go:262] Caches are synced for disruption
	I0731 20:35:44.696645       1 disruption.go:371] Sending events to api server.
	I0731 20:35:44.736332       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 20:35:44.767362       1 shared_informer.go:262] Caches are synced for cronjob
	I0731 20:35:44.783562       1 shared_informer.go:262] Caches are synced for stateful set
	I0731 20:35:44.783770       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 20:35:44.809267       1 shared_informer.go:262] Caches are synced for daemon sets
	I0731 20:35:45.232189       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 20:35:45.268199       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 20:35:45.268233       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [e3e6b167670a9e960b6df55ee769885cd64f0a411e0ae7ba0c5f0a067f1f4a25] <==
	I0731 20:35:32.899352       1 node.go:163] Successfully retrieved node IP: 192.168.39.177
	I0731 20:35:32.899750       1 server_others.go:138] "Detected node IP" address="192.168.39.177"
	I0731 20:35:32.899929       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 20:35:32.962574       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 20:35:32.962741       1 server_others.go:206] "Using iptables Proxier"
	I0731 20:35:32.962885       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 20:35:32.964409       1 server.go:661] "Version info" version="v1.24.4"
	I0731 20:35:32.964459       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:35:32.966444       1 config.go:317] "Starting service config controller"
	I0731 20:35:32.974045       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 20:35:32.967745       1 config.go:444] "Starting node config controller"
	I0731 20:35:32.974227       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 20:35:32.974252       1 shared_informer.go:262] Caches are synced for node config
	I0731 20:35:32.972681       1 config.go:226] "Starting endpoint slice config controller"
	I0731 20:35:32.974261       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 20:35:33.078311       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0731 20:35:33.078397       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [1ed2313b0fd24ec6eabd3f10acc7b8186101b8fa69a06b0e1f75a55f46151c48] <==
	I0731 20:35:28.917285       1 serving.go:348] Generated self-signed cert in-memory
	W0731 20:35:31.500999       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 20:35:31.501158       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:35:31.501344       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 20:35:31.501468       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 20:35:31.539675       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0731 20:35:31.539788       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:35:31.545788       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0731 20:35:31.546467       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 20:35:31.547300       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 20:35:31.547377       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 20:35:31.647636       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.888192    1085 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.888825    1085 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: E0731 20:35:31.888624    1085 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-tj6pg" podUID=5efe8c81-0ebe-437a-a596-d3ccd4c4c890
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.932021    1085 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c82419da-6568-450d-8548-f87cc99b66b8-lib-modules\") pod \"kube-proxy-8m2vz\" (UID: \"c82419da-6568-450d-8548-f87cc99b66b8\") " pod="kube-system/kube-proxy-8m2vz"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.932635    1085 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74zqs\" (UniqueName: \"kubernetes.io/projected/c82419da-6568-450d-8548-f87cc99b66b8-kube-api-access-74zqs\") pod \"kube-proxy-8m2vz\" (UID: \"c82419da-6568-450d-8548-f87cc99b66b8\") " pod="kube-system/kube-proxy-8m2vz"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: E0731 20:35:31.933425    1085 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.933643    1085 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume\") pod \"coredns-6d4b75cb6d-tj6pg\" (UID: \"5efe8c81-0ebe-437a-a596-d3ccd4c4c890\") " pod="kube-system/coredns-6d4b75cb6d-tj6pg"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.933938    1085 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ktc\" (UniqueName: \"kubernetes.io/projected/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-kube-api-access-77ktc\") pod \"coredns-6d4b75cb6d-tj6pg\" (UID: \"5efe8c81-0ebe-437a-a596-d3ccd4c4c890\") " pod="kube-system/coredns-6d4b75cb6d-tj6pg"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.934079    1085 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9k4\" (UniqueName: \"kubernetes.io/projected/7f7257d3-5304-463e-8691-4cbb1bcfea10-kube-api-access-mc9k4\") pod \"storage-provisioner\" (UID: \"7f7257d3-5304-463e-8691-4cbb1bcfea10\") " pod="kube-system/storage-provisioner"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.934213    1085 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c82419da-6568-450d-8548-f87cc99b66b8-kube-proxy\") pod \"kube-proxy-8m2vz\" (UID: \"c82419da-6568-450d-8548-f87cc99b66b8\") " pod="kube-system/kube-proxy-8m2vz"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.934337    1085 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7f7257d3-5304-463e-8691-4cbb1bcfea10-tmp\") pod \"storage-provisioner\" (UID: \"7f7257d3-5304-463e-8691-4cbb1bcfea10\") " pod="kube-system/storage-provisioner"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.934459    1085 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c82419da-6568-450d-8548-f87cc99b66b8-xtables-lock\") pod \"kube-proxy-8m2vz\" (UID: \"c82419da-6568-450d-8548-f87cc99b66b8\") " pod="kube-system/kube-proxy-8m2vz"
	Jul 31 20:35:31 test-preload-520960 kubelet[1085]: I0731 20:35:31.934606    1085 reconciler.go:159] "Reconciler: start to sync state"
	Jul 31 20:35:32 test-preload-520960 kubelet[1085]: E0731 20:35:32.039857    1085 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 20:35:32 test-preload-520960 kubelet[1085]: E0731 20:35:32.040008    1085 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume podName:5efe8c81-0ebe-437a-a596-d3ccd4c4c890 nodeName:}" failed. No retries permitted until 2024-07-31 20:35:32.539973492 +0000 UTC m=+5.772832492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume") pod "coredns-6d4b75cb6d-tj6pg" (UID: "5efe8c81-0ebe-437a-a596-d3ccd4c4c890") : object "kube-system"/"coredns" not registered
	Jul 31 20:35:32 test-preload-520960 kubelet[1085]: E0731 20:35:32.545499    1085 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 20:35:32 test-preload-520960 kubelet[1085]: E0731 20:35:32.545599    1085 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume podName:5efe8c81-0ebe-437a-a596-d3ccd4c4c890 nodeName:}" failed. No retries permitted until 2024-07-31 20:35:33.545584873 +0000 UTC m=+6.778443881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume") pod "coredns-6d4b75cb6d-tj6pg" (UID: "5efe8c81-0ebe-437a-a596-d3ccd4c4c890") : object "kube-system"/"coredns" not registered
	Jul 31 20:35:33 test-preload-520960 kubelet[1085]: E0731 20:35:33.554409    1085 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 20:35:33 test-preload-520960 kubelet[1085]: E0731 20:35:33.554488    1085 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume podName:5efe8c81-0ebe-437a-a596-d3ccd4c4c890 nodeName:}" failed. No retries permitted until 2024-07-31 20:35:35.55446654 +0000 UTC m=+8.787325535 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume") pod "coredns-6d4b75cb6d-tj6pg" (UID: "5efe8c81-0ebe-437a-a596-d3ccd4c4c890") : object "kube-system"/"coredns" not registered
	Jul 31 20:35:33 test-preload-520960 kubelet[1085]: E0731 20:35:33.996774    1085 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-tj6pg" podUID=5efe8c81-0ebe-437a-a596-d3ccd4c4c890
	Jul 31 20:35:34 test-preload-520960 kubelet[1085]: I0731 20:35:34.036847    1085 scope.go:110] "RemoveContainer" containerID="8d09825bc6852a0bbac82c2db4464a18b89d04447cce541f3c16fead96aa88fc"
	Jul 31 20:35:35 test-preload-520960 kubelet[1085]: I0731 20:35:35.002745    1085 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=be15ce55-add2-4acc-873f-3c7a71eac2ba path="/var/lib/kubelet/pods/be15ce55-add2-4acc-873f-3c7a71eac2ba/volumes"
	Jul 31 20:35:35 test-preload-520960 kubelet[1085]: E0731 20:35:35.572216    1085 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 20:35:35 test-preload-520960 kubelet[1085]: E0731 20:35:35.572322    1085 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume podName:5efe8c81-0ebe-437a-a596-d3ccd4c4c890 nodeName:}" failed. No retries permitted until 2024-07-31 20:35:39.572303655 +0000 UTC m=+12.805162652 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5efe8c81-0ebe-437a-a596-d3ccd4c4c890-config-volume") pod "coredns-6d4b75cb6d-tj6pg" (UID: "5efe8c81-0ebe-437a-a596-d3ccd4c4c890") : object "kube-system"/"coredns" not registered
	Jul 31 20:35:35 test-preload-520960 kubelet[1085]: E0731 20:35:35.999414    1085 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-tj6pg" podUID=5efe8c81-0ebe-437a-a596-d3ccd4c4c890
	
	
	==> storage-provisioner [008b2de0c68e41436c1192faa2ada5bf89928f1abdaf97a2674d6ec983de66e0] <==
	I0731 20:35:34.132130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:35:34.141012       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:35:34.141057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [8d09825bc6852a0bbac82c2db4464a18b89d04447cce541f3c16fead96aa88fc] <==
	I0731 20:35:33.091656       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 20:35:33.095864       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-520960 -n test-preload-520960
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-520960 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-520960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-520960
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-520960: (1.097991986s)
--- FAIL: TestPreload (256.70s)

                                                
                                    
x
+
TestKubernetesUpgrade (432.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m58.098330931s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-519871] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-519871" primary control-plane node in "kubernetes-upgrade-519871" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:40:50.921924  169852 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:40:50.922196  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:40:50.922205  169852 out.go:304] Setting ErrFile to fd 2...
	I0731 20:40:50.922209  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:40:50.922449  169852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:40:50.923065  169852 out.go:298] Setting JSON to false
	I0731 20:40:50.924061  169852 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8587,"bootTime":1722449864,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:40:50.924124  169852 start.go:139] virtualization: kvm guest
	I0731 20:40:50.926349  169852 out.go:177] * [kubernetes-upgrade-519871] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:40:50.927613  169852 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:40:50.927643  169852 notify.go:220] Checking for updates...
	I0731 20:40:50.929940  169852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:40:50.931212  169852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:40:50.932587  169852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:40:50.933934  169852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:40:50.935120  169852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:40:50.936624  169852 config.go:182] Loaded profile config "NoKubernetes-938926": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0731 20:40:50.936714  169852 config.go:182] Loaded profile config "cert-expiration-812046": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:40:50.936793  169852 config.go:182] Loaded profile config "running-upgrade-437728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 20:40:50.936878  169852 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:40:50.974741  169852 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 20:40:50.975978  169852 start.go:297] selected driver: kvm2
	I0731 20:40:50.975993  169852 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:40:50.976008  169852 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:40:50.976993  169852 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:40:50.977104  169852 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:40:50.992208  169852 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:40:50.992260  169852 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 20:40:50.992475  169852 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 20:40:50.992532  169852 cni.go:84] Creating CNI manager for ""
	I0731 20:40:50.992546  169852 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:40:50.992553  169852 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 20:40:50.992602  169852 start.go:340] cluster config:
	{Name:kubernetes-upgrade-519871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-519871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:40:50.992693  169852 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:40:50.994717  169852 out.go:177] * Starting "kubernetes-upgrade-519871" primary control-plane node in "kubernetes-upgrade-519871" cluster
	I0731 20:40:50.996170  169852 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:40:50.996227  169852 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:40:50.996237  169852 cache.go:56] Caching tarball of preloaded images
	I0731 20:40:50.996320  169852 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:40:50.996340  169852 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:40:50.996432  169852 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/config.json ...
	I0731 20:40:50.996449  169852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/config.json: {Name:mkc3687b9daaac9d8604eae4ec07e8632ad09652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:40:50.996601  169852 start.go:360] acquireMachinesLock for kubernetes-upgrade-519871: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:41:16.489815  169852 start.go:364] duration metric: took 25.493183919s to acquireMachinesLock for "kubernetes-upgrade-519871"
	I0731 20:41:16.489915  169852 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-519871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-519871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:41:16.489999  169852 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 20:41:16.492025  169852 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 20:41:16.492219  169852 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:41:16.492292  169852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:41:16.512122  169852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0731 20:41:16.512577  169852 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:41:16.513142  169852 main.go:141] libmachine: Using API Version  1
	I0731 20:41:16.513165  169852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:41:16.513570  169852 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:41:16.513823  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetMachineName
	I0731 20:41:16.513985  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .DriverName
	I0731 20:41:16.514181  169852 start.go:159] libmachine.API.Create for "kubernetes-upgrade-519871" (driver="kvm2")
	I0731 20:41:16.514215  169852 client.go:168] LocalClient.Create starting
	I0731 20:41:16.514262  169852 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 20:41:16.514306  169852 main.go:141] libmachine: Decoding PEM data...
	I0731 20:41:16.514330  169852 main.go:141] libmachine: Parsing certificate...
	I0731 20:41:16.514408  169852 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 20:41:16.514437  169852 main.go:141] libmachine: Decoding PEM data...
	I0731 20:41:16.514457  169852 main.go:141] libmachine: Parsing certificate...
	I0731 20:41:16.514481  169852 main.go:141] libmachine: Running pre-create checks...
	I0731 20:41:16.514498  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .PreCreateCheck
	I0731 20:41:16.514883  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetConfigRaw
	I0731 20:41:16.515313  169852 main.go:141] libmachine: Creating machine...
	I0731 20:41:16.515328  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .Create
	I0731 20:41:16.515456  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Creating KVM machine...
	I0731 20:41:16.516530  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found existing default KVM network
	I0731 20:41:16.517952  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:16.517802  170345 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4c:1a:48} reservation:<nil>}
	I0731 20:41:16.518821  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:16.518746  170345 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:40:8b:30} reservation:<nil>}
	I0731 20:41:16.519553  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:16.519456  170345 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:bd:72:f0} reservation:<nil>}
	I0731 20:41:16.520539  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:16.520459  170345 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000285bf0}
	I0731 20:41:16.520564  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | created network xml: 
	I0731 20:41:16.520574  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | <network>
	I0731 20:41:16.520583  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |   <name>mk-kubernetes-upgrade-519871</name>
	I0731 20:41:16.520615  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |   <dns enable='no'/>
	I0731 20:41:16.520638  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |   
	I0731 20:41:16.520656  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0731 20:41:16.520667  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |     <dhcp>
	I0731 20:41:16.520676  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0731 20:41:16.520686  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |     </dhcp>
	I0731 20:41:16.520693  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |   </ip>
	I0731 20:41:16.520697  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG |   
	I0731 20:41:16.520703  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | </network>
	I0731 20:41:16.520708  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | 
	I0731 20:41:16.525769  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | trying to create private KVM network mk-kubernetes-upgrade-519871 192.168.72.0/24...
	I0731 20:41:16.597278  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871 ...
	I0731 20:41:16.597315  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | private KVM network mk-kubernetes-upgrade-519871 192.168.72.0/24 created
	I0731 20:41:16.597329  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 20:41:16.597371  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:16.597207  170345 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:41:16.597437  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 20:41:16.841944  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:16.841828  170345 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/id_rsa...
	I0731 20:41:17.003708  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:17.003535  170345 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/kubernetes-upgrade-519871.rawdisk...
	I0731 20:41:17.003739  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Writing magic tar header
	I0731 20:41:17.003753  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Writing SSH key tar header
	I0731 20:41:17.003764  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:17.003679  170345 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871 ...
	I0731 20:41:17.003836  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871
	I0731 20:41:17.003879  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871 (perms=drwx------)
	I0731 20:41:17.003901  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 20:41:17.003916  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:41:17.003933  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 20:41:17.003946  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 20:41:17.003959  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:41:17.003985  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 20:41:17.003998  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:41:17.004009  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:41:17.004019  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:41:17.004034  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Checking permissions on dir: /home
	I0731 20:41:17.004050  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:41:17.004064  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Creating domain...
	I0731 20:41:17.004077  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Skipping /home - not owner
	I0731 20:41:17.005222  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) define libvirt domain using xml: 
	I0731 20:41:17.005242  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) <domain type='kvm'>
	I0731 20:41:17.005253  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   <name>kubernetes-upgrade-519871</name>
	I0731 20:41:17.005261  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   <memory unit='MiB'>2200</memory>
	I0731 20:41:17.005270  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   <vcpu>2</vcpu>
	I0731 20:41:17.005276  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   <features>
	I0731 20:41:17.005289  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <acpi/>
	I0731 20:41:17.005297  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <apic/>
	I0731 20:41:17.005302  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <pae/>
	I0731 20:41:17.005306  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     
	I0731 20:41:17.005312  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   </features>
	I0731 20:41:17.005318  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   <cpu mode='host-passthrough'>
	I0731 20:41:17.005330  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   
	I0731 20:41:17.005352  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   </cpu>
	I0731 20:41:17.005365  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   <os>
	I0731 20:41:17.005374  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <type>hvm</type>
	I0731 20:41:17.005382  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <boot dev='cdrom'/>
	I0731 20:41:17.005387  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <boot dev='hd'/>
	I0731 20:41:17.005393  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <bootmenu enable='no'/>
	I0731 20:41:17.005397  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   </os>
	I0731 20:41:17.005402  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   <devices>
	I0731 20:41:17.005420  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <disk type='file' device='cdrom'>
	I0731 20:41:17.005448  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/boot2docker.iso'/>
	I0731 20:41:17.005464  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <target dev='hdc' bus='scsi'/>
	I0731 20:41:17.005470  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <readonly/>
	I0731 20:41:17.005480  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     </disk>
	I0731 20:41:17.005487  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <disk type='file' device='disk'>
	I0731 20:41:17.005498  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:41:17.005516  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/kubernetes-upgrade-519871.rawdisk'/>
	I0731 20:41:17.005523  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <target dev='hda' bus='virtio'/>
	I0731 20:41:17.005529  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     </disk>
	I0731 20:41:17.005536  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <interface type='network'>
	I0731 20:41:17.005542  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <source network='mk-kubernetes-upgrade-519871'/>
	I0731 20:41:17.005550  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <model type='virtio'/>
	I0731 20:41:17.005556  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     </interface>
	I0731 20:41:17.005563  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <interface type='network'>
	I0731 20:41:17.005595  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <source network='default'/>
	I0731 20:41:17.005620  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <model type='virtio'/>
	I0731 20:41:17.005628  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     </interface>
	I0731 20:41:17.005645  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <serial type='pty'>
	I0731 20:41:17.005657  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <target port='0'/>
	I0731 20:41:17.005669  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     </serial>
	I0731 20:41:17.005679  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <console type='pty'>
	I0731 20:41:17.005692  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <target type='serial' port='0'/>
	I0731 20:41:17.005705  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     </console>
	I0731 20:41:17.005717  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     <rng model='virtio'>
	I0731 20:41:17.005730  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)       <backend model='random'>/dev/random</backend>
	I0731 20:41:17.005739  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     </rng>
	I0731 20:41:17.005745  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     
	I0731 20:41:17.005752  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)     
	I0731 20:41:17.005761  169852 main.go:141] libmachine: (kubernetes-upgrade-519871)   </devices>
	I0731 20:41:17.005772  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) </domain>
	I0731 20:41:17.005784  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) 
	I0731 20:41:17.010085  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:cc:b4:ca in network default
	I0731 20:41:17.010701  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Ensuring networks are active...
	I0731 20:41:17.010738  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:17.011343  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Ensuring network default is active
	I0731 20:41:17.011730  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Ensuring network mk-kubernetes-upgrade-519871 is active
	I0731 20:41:17.012427  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Getting domain xml...
	I0731 20:41:17.013118  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Creating domain...
	I0731 20:41:18.227266  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Waiting to get IP...
	I0731 20:41:18.228048  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:18.228465  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:18.228507  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:18.228444  170345 retry.go:31] will retry after 208.094288ms: waiting for machine to come up
	I0731 20:41:18.437924  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:18.438426  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:18.438455  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:18.438368  170345 retry.go:31] will retry after 262.47921ms: waiting for machine to come up
	I0731 20:41:18.702954  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:18.703420  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:18.703444  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:18.703385  170345 retry.go:31] will retry after 407.423738ms: waiting for machine to come up
	I0731 20:41:19.112045  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:19.112512  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:19.112539  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:19.112456  170345 retry.go:31] will retry after 449.320798ms: waiting for machine to come up
	I0731 20:41:19.563089  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:19.563589  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:19.563615  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:19.563531  170345 retry.go:31] will retry after 542.418226ms: waiting for machine to come up
	I0731 20:41:20.107270  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:20.107857  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:20.107901  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:20.107798  170345 retry.go:31] will retry after 786.87954ms: waiting for machine to come up
	I0731 20:41:20.896247  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:20.896787  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:20.896820  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:20.896740  170345 retry.go:31] will retry after 1.124486885s: waiting for machine to come up
	I0731 20:41:22.023224  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:22.023741  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:22.023765  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:22.023686  170345 retry.go:31] will retry after 1.033878777s: waiting for machine to come up
	I0731 20:41:23.059609  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:23.060038  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:23.060065  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:23.059987  170345 retry.go:31] will retry after 1.505094663s: waiting for machine to come up
	I0731 20:41:24.566740  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:24.567251  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:24.567281  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:24.567195  170345 retry.go:31] will retry after 1.555733606s: waiting for machine to come up
	I0731 20:41:26.125132  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:26.125598  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:26.125619  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:26.125560  170345 retry.go:31] will retry after 2.700220229s: waiting for machine to come up
	I0731 20:41:28.827912  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:28.828404  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:28.828434  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:28.828342  170345 retry.go:31] will retry after 2.71197256s: waiting for machine to come up
	I0731 20:41:31.541784  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:31.542205  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:31.542227  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:31.542155  170345 retry.go:31] will retry after 4.312436748s: waiting for machine to come up
	I0731 20:41:35.858938  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:35.859360  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find current IP address of domain kubernetes-upgrade-519871 in network mk-kubernetes-upgrade-519871
	I0731 20:41:35.859382  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | I0731 20:41:35.859314  170345 retry.go:31] will retry after 4.388129704s: waiting for machine to come up
	I0731 20:41:40.251959  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.252400  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has current primary IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.252449  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Found IP for machine: 192.168.72.217
	I0731 20:41:40.252473  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Reserving static IP address...
	I0731 20:41:40.252803  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-519871", mac: "52:54:00:49:87:3f", ip: "192.168.72.217"} in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.328759  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Getting to WaitForSSH function...
	I0731 20:41:40.328790  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Reserved static IP address: 192.168.72.217
	I0731 20:41:40.328804  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Waiting for SSH to be available...
	I0731 20:41:40.331373  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.331916  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:40.331948  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.331970  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Using SSH client type: external
	I0731 20:41:40.331988  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/id_rsa (-rw-------)
	I0731 20:41:40.332016  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:41:40.332033  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | About to run SSH command:
	I0731 20:41:40.332049  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | exit 0
	I0731 20:41:40.462064  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | SSH cmd err, output: <nil>: 
	I0731 20:41:40.462364  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) KVM machine creation complete!
	I0731 20:41:40.462749  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetConfigRaw
	I0731 20:41:40.463300  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .DriverName
	I0731 20:41:40.463504  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .DriverName
	I0731 20:41:40.463688  169852 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:41:40.463703  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetState
	I0731 20:41:40.465128  169852 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:41:40.465144  169852 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:41:40.465153  169852 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:41:40.465163  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:40.467609  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.468073  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:40.468101  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.468257  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:40.468416  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:40.468582  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:40.468756  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:40.468949  169852 main.go:141] libmachine: Using SSH client type: native
	I0731 20:41:40.469226  169852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I0731 20:41:40.469244  169852 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:41:40.580550  169852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:41:40.580573  169852 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:41:40.580583  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:40.583456  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.583858  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:40.583895  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.583998  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:40.584193  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:40.584408  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:40.584586  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:40.584810  169852 main.go:141] libmachine: Using SSH client type: native
	I0731 20:41:40.584961  169852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I0731 20:41:40.584971  169852 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:41:40.694156  169852 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:41:40.694227  169852 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:41:40.694236  169852 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:41:40.694245  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetMachineName
	I0731 20:41:40.694530  169852 buildroot.go:166] provisioning hostname "kubernetes-upgrade-519871"
	I0731 20:41:40.694561  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetMachineName
	I0731 20:41:40.694754  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:40.697259  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.697630  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:40.697655  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.697805  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:40.697987  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:40.698184  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:40.698344  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:40.698534  169852 main.go:141] libmachine: Using SSH client type: native
	I0731 20:41:40.698775  169852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I0731 20:41:40.698792  169852 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-519871 && echo "kubernetes-upgrade-519871" | sudo tee /etc/hostname
	I0731 20:41:40.825482  169852 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-519871
	
	I0731 20:41:40.825514  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:40.828458  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.828799  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:40.828830  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.829023  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:40.829223  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:40.829446  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:40.829605  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:40.829800  169852 main.go:141] libmachine: Using SSH client type: native
	I0731 20:41:40.830012  169852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I0731 20:41:40.830029  169852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-519871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-519871/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-519871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:41:40.951060  169852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:41:40.951097  169852 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:41:40.951145  169852 buildroot.go:174] setting up certificates
	I0731 20:41:40.951163  169852 provision.go:84] configureAuth start
	I0731 20:41:40.951181  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetMachineName
	I0731 20:41:40.951487  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetIP
	I0731 20:41:40.954104  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.954517  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:40.954547  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.954694  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:40.956837  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.957268  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:40.957303  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:40.957471  169852 provision.go:143] copyHostCerts
	I0731 20:41:40.957528  169852 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:41:40.957540  169852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:41:40.957591  169852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:41:40.957687  169852 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:41:40.957696  169852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:41:40.957715  169852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:41:40.957782  169852 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:41:40.957789  169852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:41:40.957805  169852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:41:40.957860  169852 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-519871 san=[127.0.0.1 192.168.72.217 kubernetes-upgrade-519871 localhost minikube]
	I0731 20:41:41.254235  169852 provision.go:177] copyRemoteCerts
	I0731 20:41:41.254298  169852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:41:41.254324  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:41.256923  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.257195  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.257220  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.257484  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:41.257724  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:41.257909  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:41.258085  169852 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/id_rsa Username:docker}
	I0731 20:41:41.348181  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:41:41.376949  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 20:41:41.403169  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:41:41.429780  169852 provision.go:87] duration metric: took 478.598663ms to configureAuth
	I0731 20:41:41.429813  169852 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:41:41.430000  169852 config.go:182] Loaded profile config "kubernetes-upgrade-519871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:41:41.430093  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:41.433391  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.433812  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.433845  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.434032  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:41.434257  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:41.434473  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:41.434657  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:41.434854  169852 main.go:141] libmachine: Using SSH client type: native
	I0731 20:41:41.435083  169852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I0731 20:41:41.435104  169852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:41:41.721387  169852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:41:41.721431  169852 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:41:41.721441  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetURL
	I0731 20:41:41.722707  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | Using libvirt version 6000000
	I0731 20:41:41.724924  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.725274  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.725318  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.725498  169852 main.go:141] libmachine: Docker is up and running!
	I0731 20:41:41.725513  169852 main.go:141] libmachine: Reticulating splines...
	I0731 20:41:41.725521  169852 client.go:171] duration metric: took 25.211298728s to LocalClient.Create
	I0731 20:41:41.725544  169852 start.go:167] duration metric: took 25.211366838s to libmachine.API.Create "kubernetes-upgrade-519871"
	I0731 20:41:41.725553  169852 start.go:293] postStartSetup for "kubernetes-upgrade-519871" (driver="kvm2")
	I0731 20:41:41.725563  169852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:41:41.725579  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .DriverName
	I0731 20:41:41.725820  169852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:41:41.725847  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:41.728687  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.729072  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.729093  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.729250  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:41.729472  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:41.729662  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:41.729799  169852 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/id_rsa Username:docker}
	I0731 20:41:41.818055  169852 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:41:41.822454  169852 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:41:41.822482  169852 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:41:41.822566  169852 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:41:41.822657  169852 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:41:41.822774  169852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:41:41.834488  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:41:41.860274  169852 start.go:296] duration metric: took 134.705814ms for postStartSetup
	I0731 20:41:41.860357  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetConfigRaw
	I0731 20:41:41.861051  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetIP
	I0731 20:41:41.863765  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.864081  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.864113  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.864350  169852 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/config.json ...
	I0731 20:41:41.864587  169852 start.go:128] duration metric: took 25.374574709s to createHost
	I0731 20:41:41.864618  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:41.867014  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.867355  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.867383  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.867661  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:41.867839  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:41.867988  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:41.868154  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:41.868382  169852 main.go:141] libmachine: Using SSH client type: native
	I0731 20:41:41.868590  169852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I0731 20:41:41.868605  169852 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:41:41.986222  169852 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722458501.964370076
	
	I0731 20:41:41.986244  169852 fix.go:216] guest clock: 1722458501.964370076
	I0731 20:41:41.986279  169852 fix.go:229] Guest: 2024-07-31 20:41:41.964370076 +0000 UTC Remote: 2024-07-31 20:41:41.864602702 +0000 UTC m=+50.978843758 (delta=99.767374ms)
	I0731 20:41:41.986331  169852 fix.go:200] guest clock delta is within tolerance: 99.767374ms
	I0731 20:41:41.986348  169852 start.go:83] releasing machines lock for "kubernetes-upgrade-519871", held for 25.496470787s
	I0731 20:41:41.986389  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .DriverName
	I0731 20:41:41.986693  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetIP
	I0731 20:41:41.989674  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.990117  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.990150  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.990332  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .DriverName
	I0731 20:41:41.990969  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .DriverName
	I0731 20:41:41.991191  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .DriverName
	I0731 20:41:41.991280  169852 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:41:41.991322  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:41.991633  169852 ssh_runner.go:195] Run: cat /version.json
	I0731 20:41:41.991660  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHHostname
	I0731 20:41:41.994403  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.994814  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.994882  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.994907  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.995005  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:41.995237  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:41.995351  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:41.995378  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:41.995420  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:41.995582  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHPort
	I0731 20:41:41.995574  169852 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/id_rsa Username:docker}
	I0731 20:41:41.995751  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHKeyPath
	I0731 20:41:41.995887  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetSSHUsername
	I0731 20:41:41.996089  169852 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/kubernetes-upgrade-519871/id_rsa Username:docker}
	I0731 20:41:42.110072  169852 ssh_runner.go:195] Run: systemctl --version
	I0731 20:41:42.119109  169852 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:41:42.297843  169852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:41:42.304370  169852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:41:42.304479  169852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:41:42.324323  169852 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:41:42.324352  169852 start.go:495] detecting cgroup driver to use...
	I0731 20:41:42.324421  169852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:41:42.348826  169852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:41:42.368320  169852 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:41:42.368390  169852 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:41:42.387803  169852 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:41:42.409676  169852 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:41:42.570617  169852 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:41:42.772672  169852 docker.go:233] disabling docker service ...
	I0731 20:41:42.772739  169852 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:41:42.797986  169852 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:41:42.817140  169852 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:41:42.951492  169852 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:41:43.100479  169852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:41:43.115612  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:41:43.137158  169852 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:41:43.137222  169852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:41:43.154189  169852 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:41:43.154268  169852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:41:43.170421  169852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:41:43.187134  169852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:41:43.202844  169852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:41:43.216569  169852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:41:43.230038  169852 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:41:43.230105  169852 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:41:43.245916  169852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:41:43.257722  169852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:41:43.384993  169852 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:41:43.571817  169852 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:41:43.571904  169852 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:41:43.578490  169852 start.go:563] Will wait 60s for crictl version
	I0731 20:41:43.578561  169852 ssh_runner.go:195] Run: which crictl
	I0731 20:41:43.583546  169852 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:41:43.635452  169852 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:41:43.635572  169852 ssh_runner.go:195] Run: crio --version
	I0731 20:41:43.670478  169852 ssh_runner.go:195] Run: crio --version
	I0731 20:41:43.705858  169852 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:41:43.707276  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) Calling .GetIP
	I0731 20:41:43.710573  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:43.711028  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:87:3f", ip: ""} in network mk-kubernetes-upgrade-519871: {Iface:virbr4 ExpiryTime:2024-07-31 21:41:31 +0000 UTC Type:0 Mac:52:54:00:49:87:3f Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:kubernetes-upgrade-519871 Clientid:01:52:54:00:49:87:3f}
	I0731 20:41:43.711060  169852 main.go:141] libmachine: (kubernetes-upgrade-519871) DBG | domain kubernetes-upgrade-519871 has defined IP address 192.168.72.217 and MAC address 52:54:00:49:87:3f in network mk-kubernetes-upgrade-519871
	I0731 20:41:43.711333  169852 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 20:41:43.716061  169852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:41:43.731221  169852 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-519871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-519871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:41:43.731370  169852 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:41:43.731436  169852 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:41:43.770991  169852 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:41:43.771074  169852 ssh_runner.go:195] Run: which lz4
	I0731 20:41:43.776719  169852 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:41:43.781565  169852 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:41:43.781612  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:41:45.625296  169852 crio.go:462] duration metric: took 1.848625777s to copy over tarball
	I0731 20:41:45.625395  169852 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:41:48.206526  169852 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.581092555s)
	I0731 20:41:48.206561  169852 crio.go:469] duration metric: took 2.581234224s to extract the tarball
	I0731 20:41:48.206583  169852 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:41:48.251750  169852 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:41:48.299509  169852 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:41:48.299538  169852 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:41:48.299590  169852 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:41:48.299627  169852 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:41:48.299675  169852 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:41:48.299706  169852 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:41:48.299678  169852 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:41:48.299670  169852 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:41:48.299744  169852 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:41:48.299757  169852 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:41:48.301679  169852 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:41:48.301790  169852 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:41:48.301887  169852 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:41:48.302062  169852 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:41:48.302185  169852 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:41:48.302332  169852 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:41:48.302430  169852 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:41:48.302512  169852 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:41:48.449682  169852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:41:48.462647  169852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:41:48.489468  169852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:41:48.490856  169852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:41:48.500983  169852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:41:48.503005  169852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:41:48.505061  169852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:41:48.535141  169852 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:41:48.535187  169852 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:41:48.535237  169852 ssh_runner.go:195] Run: which crictl
	I0731 20:41:48.567950  169852 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:41:48.568001  169852 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:41:48.568059  169852 ssh_runner.go:195] Run: which crictl
	I0731 20:41:48.627734  169852 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:41:48.627784  169852 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:41:48.627842  169852 ssh_runner.go:195] Run: which crictl
	I0731 20:41:48.666689  169852 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:41:48.666765  169852 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:41:48.666764  169852 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:41:48.666848  169852 ssh_runner.go:195] Run: which crictl
	I0731 20:41:48.666859  169852 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:41:48.666879  169852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:41:48.666893  169852 ssh_runner.go:195] Run: which crictl
	I0731 20:41:48.666701  169852 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:41:48.666934  169852 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:41:48.666955  169852 ssh_runner.go:195] Run: which crictl
	I0731 20:41:48.666881  169852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:41:48.666969  169852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:41:48.666821  169852 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:41:48.667008  169852 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:41:48.667056  169852 ssh_runner.go:195] Run: which crictl
	I0731 20:41:48.729391  169852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:41:48.729549  169852 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:41:48.743831  169852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:41:48.743845  169852 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:41:48.756810  169852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:41:48.756836  169852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:41:48.756955  169852 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:41:48.811175  169852 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:41:48.835046  169852 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:41:48.839423  169852 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:41:48.839524  169852 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:41:49.180504  169852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:41:49.327793  169852 cache_images.go:92] duration metric: took 1.028237879s to LoadCachedImages
	W0731 20:41:49.327889  169852 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0731 20:41:49.327907  169852 kubeadm.go:934] updating node { 192.168.72.217 8443 v1.20.0 crio true true} ...
	I0731 20:41:49.328049  169852 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-519871 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-519871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:41:49.328155  169852 ssh_runner.go:195] Run: crio config
	I0731 20:41:49.376695  169852 cni.go:84] Creating CNI manager for ""
	I0731 20:41:49.376720  169852 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:41:49.376733  169852 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:41:49.376758  169852 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.217 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-519871 NodeName:kubernetes-upgrade-519871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:41:49.376929  169852 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-519871"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:41:49.376994  169852 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:41:49.387895  169852 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:41:49.387952  169852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:41:49.398869  169852 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0731 20:41:49.415967  169852 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:41:49.434015  169852 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0731 20:41:49.451750  169852 ssh_runner.go:195] Run: grep 192.168.72.217	control-plane.minikube.internal$ /etc/hosts
	I0731 20:41:49.455638  169852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:41:49.468445  169852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:41:49.593007  169852 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:41:49.610302  169852 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871 for IP: 192.168.72.217
	I0731 20:41:49.610329  169852 certs.go:194] generating shared ca certs ...
	I0731 20:41:49.610352  169852 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:41:49.610572  169852 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:41:49.610635  169852 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:41:49.610649  169852 certs.go:256] generating profile certs ...
	I0731 20:41:49.610721  169852 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/client.key
	I0731 20:41:49.610744  169852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/client.crt with IP's: []
	I0731 20:41:49.792860  169852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/client.crt ...
	I0731 20:41:49.792897  169852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/client.crt: {Name:mk85ff841cad8180a5218ceb28bb5d98caaeb354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:41:49.793094  169852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/client.key ...
	I0731 20:41:49.793117  169852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/client.key: {Name:mk5ff760f7686daab764590b7ddc3c7bc33b86a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:41:49.793233  169852 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.key.5f79653b
	I0731 20:41:49.793260  169852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.crt.5f79653b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.217]
	I0731 20:41:50.216918  169852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.crt.5f79653b ...
	I0731 20:41:50.216952  169852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.crt.5f79653b: {Name:mk2c52b706aeb6f9d25af756881a174369911374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:41:50.217117  169852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.key.5f79653b ...
	I0731 20:41:50.217132  169852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.key.5f79653b: {Name:mk85b24981ec6218c6032e3a04deae9e66efb89a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:41:50.217201  169852 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.crt.5f79653b -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.crt
	I0731 20:41:50.217270  169852 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.key.5f79653b -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.key
	I0731 20:41:50.217321  169852 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/proxy-client.key
	I0731 20:41:50.217353  169852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/proxy-client.crt with IP's: []
	I0731 20:41:50.331651  169852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/proxy-client.crt ...
	I0731 20:41:50.331681  169852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/proxy-client.crt: {Name:mkdd2a3f34fc94290e77d42f920f22d80225f299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:41:50.331839  169852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/proxy-client.key ...
	I0731 20:41:50.331851  169852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/proxy-client.key: {Name:mk678b4b0950e7c6a842f992d074309811b3e66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:41:50.332030  169852 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:41:50.332067  169852 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:41:50.332076  169852 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:41:50.332099  169852 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:41:50.332121  169852 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:41:50.332142  169852 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:41:50.332181  169852 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:41:50.332809  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:41:50.359905  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:41:50.384418  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:41:50.409403  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:41:50.434302  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 20:41:50.458743  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:41:50.483396  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:41:50.507334  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kubernetes-upgrade-519871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:41:50.536955  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:41:50.566037  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:41:50.592888  169852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:41:50.616852  169852 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:41:50.634621  169852 ssh_runner.go:195] Run: openssl version
	I0731 20:41:50.640604  169852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:41:50.655713  169852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:41:50.661534  169852 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:41:50.661596  169852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:41:50.669358  169852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:41:50.683727  169852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:41:50.696274  169852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:41:50.700933  169852 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:41:50.700997  169852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:41:50.708286  169852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:41:50.720976  169852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:41:50.735284  169852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:41:50.741566  169852 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:41:50.741631  169852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:41:50.749480  169852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:41:50.761827  169852 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:41:50.766011  169852 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:41:50.766067  169852 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-519871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-519871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:41:50.766160  169852 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:41:50.766246  169852 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:41:50.815350  169852 cri.go:89] found id: ""
	I0731 20:41:50.815433  169852 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:41:50.826967  169852 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:41:50.837683  169852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:41:50.848577  169852 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:41:50.848597  169852 kubeadm.go:157] found existing configuration files:
	
	I0731 20:41:50.848638  169852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:41:50.859093  169852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:41:50.859153  169852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:41:50.870336  169852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:41:50.880419  169852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:41:50.880472  169852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:41:50.890867  169852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:41:50.900686  169852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:41:50.900741  169852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:41:50.910562  169852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:41:50.920769  169852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:41:50.920831  169852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:41:50.931946  169852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 20:41:51.071950  169852 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 20:41:51.072208  169852 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 20:41:51.259587  169852 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 20:41:51.259732  169852 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 20:41:51.259862  169852 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 20:41:51.502136  169852 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 20:41:51.551882  169852 out.go:204]   - Generating certificates and keys ...
	I0731 20:41:51.552008  169852 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 20:41:51.552119  169852 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 20:41:51.739259  169852 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 20:41:52.105478  169852 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 20:41:52.649503  169852 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 20:41:52.924122  169852 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 20:41:53.450094  169852 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 20:41:53.450275  169852 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-519871 localhost] and IPs [192.168.72.217 127.0.0.1 ::1]
	I0731 20:41:53.798482  169852 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 20:41:53.798701  169852 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-519871 localhost] and IPs [192.168.72.217 127.0.0.1 ::1]
	I0731 20:41:53.952858  169852 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 20:41:54.275015  169852 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 20:41:54.402017  169852 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 20:41:54.402362  169852 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 20:41:54.509007  169852 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 20:41:54.651524  169852 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 20:41:54.776908  169852 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 20:41:55.155266  169852 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 20:41:55.173645  169852 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 20:41:55.174662  169852 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 20:41:55.174737  169852 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 20:41:55.319011  169852 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 20:41:55.320816  169852 out.go:204]   - Booting up control plane ...
	I0731 20:41:55.320954  169852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 20:41:55.329539  169852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 20:41:55.330545  169852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 20:41:55.331309  169852 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 20:41:55.336030  169852 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 20:42:35.332364  169852 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 20:42:35.333236  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:42:35.333539  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:42:40.333430  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:42:40.333659  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:42:50.332616  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:42:50.332877  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:43:10.332780  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:43:10.333049  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:43:50.335708  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:43:50.336281  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:43:50.336308  169852 kubeadm.go:310] 
	I0731 20:43:50.336409  169852 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 20:43:50.336498  169852 kubeadm.go:310] 		timed out waiting for the condition
	I0731 20:43:50.336510  169852 kubeadm.go:310] 
	I0731 20:43:50.336591  169852 kubeadm.go:310] 	This error is likely caused by:
	I0731 20:43:50.336674  169852 kubeadm.go:310] 		- The kubelet is not running
	I0731 20:43:50.336903  169852 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 20:43:50.336915  169852 kubeadm.go:310] 
	I0731 20:43:50.337138  169852 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 20:43:50.337208  169852 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 20:43:50.337278  169852 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 20:43:50.337290  169852 kubeadm.go:310] 
	I0731 20:43:50.337575  169852 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 20:43:50.337747  169852 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 20:43:50.337758  169852 kubeadm.go:310] 
	I0731 20:43:50.337978  169852 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 20:43:50.338223  169852 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 20:43:50.338385  169852 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 20:43:50.338532  169852 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 20:43:50.338547  169852 kubeadm.go:310] 
	I0731 20:43:50.338780  169852 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 20:43:50.339045  169852 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 20:43:50.339181  169852 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 20:43:50.339397  169852 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-519871 localhost] and IPs [192.168.72.217 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-519871 localhost] and IPs [192.168.72.217 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-519871 localhost] and IPs [192.168.72.217 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-519871 localhost] and IPs [192.168.72.217 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 20:43:50.339473  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 20:43:51.865829  169852 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.526322829s)
	I0731 20:43:51.865930  169852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:43:51.881656  169852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:43:51.893601  169852 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:43:51.893624  169852 kubeadm.go:157] found existing configuration files:
	
	I0731 20:43:51.893683  169852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:43:51.904712  169852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:43:51.904778  169852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:43:51.915998  169852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:43:51.926576  169852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:43:51.926643  169852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:43:51.935995  169852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:43:51.946712  169852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:43:51.946774  169852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:43:51.956932  169852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:43:51.967741  169852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:43:51.967830  169852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:43:51.978179  169852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 20:43:52.053909  169852 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 20:43:52.054012  169852 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 20:43:52.204146  169852 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 20:43:52.204303  169852 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 20:43:52.204439  169852 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 20:43:52.385599  169852 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 20:43:52.456878  169852 out.go:204]   - Generating certificates and keys ...
	I0731 20:43:52.457012  169852 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 20:43:52.457096  169852 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 20:43:52.457207  169852 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 20:43:52.457326  169852 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 20:43:52.457458  169852 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 20:43:52.457544  169852 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 20:43:52.457601  169852 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 20:43:52.457653  169852 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 20:43:52.457722  169852 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 20:43:52.457794  169852 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 20:43:52.457830  169852 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 20:43:52.457915  169852 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 20:43:52.615122  169852 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 20:43:52.807627  169852 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 20:43:52.920839  169852 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 20:43:53.084115  169852 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 20:43:53.099444  169852 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 20:43:53.100497  169852 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 20:43:53.100563  169852 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 20:43:53.277532  169852 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 20:43:53.321059  169852 out.go:204]   - Booting up control plane ...
	I0731 20:43:53.321210  169852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 20:43:53.321321  169852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 20:43:53.321449  169852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 20:43:53.321554  169852 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 20:43:53.321783  169852 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 20:44:33.305401  169852 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 20:44:33.305824  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:44:33.306044  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:44:38.306760  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:44:38.307029  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:44:48.307215  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:44:48.307492  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:45:08.306662  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:45:08.306929  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:45:48.306445  169852 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:45:48.306694  169852 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:45:48.306717  169852 kubeadm.go:310] 
	I0731 20:45:48.306777  169852 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 20:45:48.306827  169852 kubeadm.go:310] 		timed out waiting for the condition
	I0731 20:45:48.306836  169852 kubeadm.go:310] 
	I0731 20:45:48.306877  169852 kubeadm.go:310] 	This error is likely caused by:
	I0731 20:45:48.306924  169852 kubeadm.go:310] 		- The kubelet is not running
	I0731 20:45:48.307052  169852 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 20:45:48.307064  169852 kubeadm.go:310] 
	I0731 20:45:48.307155  169852 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 20:45:48.307192  169852 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 20:45:48.307220  169852 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 20:45:48.307226  169852 kubeadm.go:310] 
	I0731 20:45:48.307306  169852 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 20:45:48.307376  169852 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 20:45:48.307382  169852 kubeadm.go:310] 
	I0731 20:45:48.307474  169852 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 20:45:48.307544  169852 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 20:45:48.307646  169852 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 20:45:48.307751  169852 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 20:45:48.307760  169852 kubeadm.go:310] 
	I0731 20:45:48.308544  169852 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 20:45:48.308668  169852 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 20:45:48.308755  169852 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 20:45:48.308826  169852 kubeadm.go:394] duration metric: took 3m57.542762923s to StartCluster
	I0731 20:45:48.308896  169852 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 20:45:48.308971  169852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 20:45:48.362883  169852 cri.go:89] found id: ""
	I0731 20:45:48.362914  169852 logs.go:276] 0 containers: []
	W0731 20:45:48.362926  169852 logs.go:278] No container was found matching "kube-apiserver"
	I0731 20:45:48.362933  169852 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 20:45:48.362994  169852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 20:45:48.402215  169852 cri.go:89] found id: ""
	I0731 20:45:48.402243  169852 logs.go:276] 0 containers: []
	W0731 20:45:48.402251  169852 logs.go:278] No container was found matching "etcd"
	I0731 20:45:48.402258  169852 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 20:45:48.402316  169852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 20:45:48.446238  169852 cri.go:89] found id: ""
	I0731 20:45:48.446265  169852 logs.go:276] 0 containers: []
	W0731 20:45:48.446274  169852 logs.go:278] No container was found matching "coredns"
	I0731 20:45:48.446281  169852 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 20:45:48.446343  169852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 20:45:48.487864  169852 cri.go:89] found id: ""
	I0731 20:45:48.487915  169852 logs.go:276] 0 containers: []
	W0731 20:45:48.487927  169852 logs.go:278] No container was found matching "kube-scheduler"
	I0731 20:45:48.487935  169852 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 20:45:48.488016  169852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 20:45:48.527302  169852 cri.go:89] found id: ""
	I0731 20:45:48.527339  169852 logs.go:276] 0 containers: []
	W0731 20:45:48.527356  169852 logs.go:278] No container was found matching "kube-proxy"
	I0731 20:45:48.527365  169852 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 20:45:48.527441  169852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 20:45:48.586524  169852 cri.go:89] found id: ""
	I0731 20:45:48.586554  169852 logs.go:276] 0 containers: []
	W0731 20:45:48.586566  169852 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 20:45:48.586575  169852 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 20:45:48.586645  169852 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 20:45:48.622643  169852 cri.go:89] found id: ""
	I0731 20:45:48.622681  169852 logs.go:276] 0 containers: []
	W0731 20:45:48.622691  169852 logs.go:278] No container was found matching "kindnet"
	I0731 20:45:48.622701  169852 logs.go:123] Gathering logs for dmesg ...
	I0731 20:45:48.622715  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 20:45:48.637147  169852 logs.go:123] Gathering logs for describe nodes ...
	I0731 20:45:48.637179  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 20:45:48.766027  169852 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 20:45:48.766050  169852 logs.go:123] Gathering logs for CRI-O ...
	I0731 20:45:48.766066  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 20:45:48.873832  169852 logs.go:123] Gathering logs for container status ...
	I0731 20:45:48.873926  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 20:45:48.914551  169852 logs.go:123] Gathering logs for kubelet ...
	I0731 20:45:48.914584  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 20:45:48.967546  169852 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 20:45:48.967595  169852 out.go:239] * 
	* 
	W0731 20:45:48.967652  169852 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 20:45:48.967671  169852 out.go:239] * 
	* 
	W0731 20:45:48.968450  169852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 20:45:48.971737  169852 out.go:177] 
	W0731 20:45:48.972923  169852 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 20:45:48.972972  169852 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 20:45:48.972993  169852 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 20:45:48.974409  169852 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
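For reference, the checks that kubeadm and minikube recommend in the output above can be run by hand against the same profile. A minimal sketch in shell, assuming the kubernetes-upgrade-519871 VM is still up and reachable through minikube ssh and that commands are run from the repo root (these exact invocations are not part of the captured output):

	# Inspect kubelet and CRI-O state inside the node (commands quoted by kubeadm in the log above):
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-519871 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-519871 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-519871 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the same start with the kubelet cgroup driver that minikube suggests above:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd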
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-519871
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-519871: (6.322899234s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-519871 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-519871 status --format={{.Host}}: exit status 7 (69.653757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
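Exit status 7 from 'minikube status' is expected here rather than a failure: the status command encodes host, cluster, and Kubernetes state in the low bits of its exit code, so 7 after a stop just means all three are down, which matches the "Stopped" host state printed above and the test's "(may be ok)" note. A quick manual check of the same thing, using the profile from this run:

    # Print only the host state; a non-zero exit here reflects a stopped cluster, not a command error
    out/minikube-linux-amd64 -p kubernetes-upgrade-519871 status --format={{.Host}}
    echo $?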
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m17.394985286s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-519871 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.407473ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-519871] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-519871
	    minikube start -p kubernetes-upgrade-519871 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5198712 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-519871 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
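The downgrade attempt is rejected up front with K8S_DOWNGRADE_UNSUPPORTED, exactly as the test expects. If a downgrade were actually wanted, the first suggestion in the output above is the safe path, roughly:

    # Recreate the profile at the older Kubernetes version (this discards the existing cluster state)
    minikube delete -p kubernetes-upgrade-519871
    minikube start -p kubernetes-upgrade-519871 --kubernetes-version=v1.20.0

The test instead takes the third option and restarts the existing cluster at v1.31.0-beta.0, which is the step that follows.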
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-519871 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.493070818s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-31 20:47:59.480986897 +0000 UTC m=+4851.731344720
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-519871 -n kubernetes-upgrade-519871
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-519871 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-519871 logs -n 25: (1.746576157s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849                             | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849                             | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849                             | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849                             | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849                             | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo cat                    | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo cat                    | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849                             | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo cat                    | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849                             | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-341849 sudo                        | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:47 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-341849                             | custom-flannel-341849 | jenkins | v1.33.1 | 31 Jul 24 20:47 UTC | 31 Jul 24 20:48 UTC |
	| start   | -p bridge-341849 --memory=3072                       | bridge-341849         | jenkins | v1.33.1 | 31 Jul 24 20:48 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:48:00
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:48:00.185893  180656 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:48:00.186187  180656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:48:00.186197  180656 out.go:304] Setting ErrFile to fd 2...
	I0731 20:48:00.186204  180656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:48:00.186423  180656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:48:00.187100  180656 out.go:298] Setting JSON to false
	I0731 20:48:00.188460  180656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9016,"bootTime":1722449864,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:48:00.188545  180656 start.go:139] virtualization: kvm guest
	I0731 20:48:00.190935  180656 out.go:177] * [bridge-341849] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:48:00.192863  180656 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:48:00.192872  180656 notify.go:220] Checking for updates...
	I0731 20:48:00.194786  180656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:48:00.196667  180656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:48:00.198067  180656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:48:00.199478  180656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:48:00.200956  180656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.647648757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39adb69c-9c33-43fb-a948-1b32fa7d77a0 name=/runtime.v1.RuntimeService/Version
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.654948732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a896e563-3d5b-4730-8f87-67bacc6ea552 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.655378824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458880655339663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a896e563-3d5b-4730-8f87-67bacc6ea552 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.656057233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b87a473-74ab-46d1-a16b-be3d7ecb371f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.656113844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b87a473-74ab-46d1-a16b-be3d7ecb371f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.656723706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85e488ea82c19028139636b16e8d40f29567983a0874220266b7987519c9a47a,PodSandboxId:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722458876671928369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c0058332a909b6ab08f67e46f120dded092517137675e6091182dabc0af71e,PodSandboxId:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458876649937340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac13a0e24192f276cefb9eaa1f3878fbde51730f971e4a52acf2f97c2281061,PodSandboxId:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722458871928815052,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc1bed309534fa5ba5b2f2cc9b94a1bc6947eea73b27b94844e6e7c9dd0d0fc,PodSandboxId:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722458871919968064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db47822bfa954cad0c43f82ff343f7a77749d95d7cf9fec0ad588bbd60c2716e,PodSandboxId:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722458871891269942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3717159113f36a551d9b1bed8852e957d5d59a8ff8bb6245428221d955c10917,PodSandboxId:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722458871901289223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c672fdcddb69a43385430ddca6f4bb5510364c35b7db37943aa9e3101323021,PodSandboxId:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458867185344562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5caa046fdfd032d8f1f604a94f1a2abf45e6bcc79da96a5f77f995ef34eb927,PodSandboxId:c9035083b93498f3a430a1e5ddcffb193d2ac730b46ae73b845a138fff95ad39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458856372287716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a76274d8a3cdc97a5db3913793cf8850c4259716863899934322ba9a37e68d,PodSandboxId:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722458856232407684,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a428c65e5a6c716dca3785dc3886dcb23f8dd5916870bd139c1a37fb02797dce,PodSandboxId:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722458855170848376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5ef507a419f0f76b80ef7eb59295049eabf256b84df0a07e8eba3da91d6ae8,PodSandboxId:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722458855215193301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4293ee6c2b653485b8fa4ee909cc4508f6a9368f4c2f8a05c8361b39035991f,PodSandboxId:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722458855139741890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e2bd761ef798923456da2e96fb21ccad273c8890e7bce32b7f0e2163308275,PodSandboxId:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458855027292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012a445eae0656df3bc10043ab44565307ae6fa7133bb8e737d163e5c8e701e7,PodSandboxId:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722458855094993028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42fde9dab2aaacb0a2914818cfc1804ac24447e68a62fcc5a0bb92af220b1b1,PodSandboxId:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722458854812430939,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ff373824206516de8fe5897a272df3af894784337332151383ab243275cdb4,PodSandboxId:0b443bacbbfe3b945b0c0d315b2f9903a82572a26deb8b9d1bd93f7c85b83e84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722458837574374450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b87a473-74ab-46d1-a16b-be3d7ecb371f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.688536312Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=938c6854-4d07-44a0-95d4-8804d409d674 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.688892775Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-5vd6c,Uid:5e20e580-9626-4f5c-ba3c-e7afd42dd316,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722458854819310849,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:47:16.863857082Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9035083b93498f3a430a1e5ddcffb193d2ac730b46ae73b845a138fff95ad39,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-2pjk5,Uid:70098609-a6f4-4ddc-9279-953658a29d44,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722458854773922837,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:47:16.856684029Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-519871,Uid:9043c47b717400d740420554a4e7f23d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722458854553587423,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,tier: control-plane,},Ann
otations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.217:8443,kubernetes.io/config.hash: 9043c47b717400d740420554a4e7f23d,kubernetes.io/config.seen: 2024-07-31T20:47:03.187455547Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-519871,Uid:667111f8934afeba41c84a25a3d24799,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722458854516334827,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 667111f8934afeba41c84a25a3d24799,kubernetes.io/config.seen: 2024-07-31T20:47:03.187459032Z,kubernetes.io/config
.source: file,},RuntimeHandler:,},&PodSandbox{Id:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-519871,Uid:c659cf2bec0f380d016377b15722e0fa,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722458854495405514,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c659cf2bec0f380d016377b15722e0fa,kubernetes.io/config.seen: 2024-07-31T20:47:03.187460120Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-519871,Uid:17ae136b412dddbae2487c452afb7881,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Created
At:1722458854466143705,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.217:2379,kubernetes.io/config.hash: 17ae136b412dddbae2487c452afb7881,kubernetes.io/config.seen: 2024-07-31T20:47:03.282695012Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:211cb9b4-1386-4ea6-a742-761b64b0de1e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722458854330600392,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T20:47:15.017366016Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&PodSand
boxMetadata{Name:kube-proxy-z67gm,Uid:5566d8d4-c936-4849-873d-d506b2428e2a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722458854315979177,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:47:14.907681456Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b443bacbbfe3b945b0c0d315b2f9903a82572a26deb8b9d1bd93f7c85b83e84,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-2pjk5,Uid:70098609-a6f4-4ddc-9279-953658a29d44,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722458837166540645,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 70098609-a6f4-4ddc-9279-953658a29d44,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:47:16.856684029Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=938c6854-4d07-44a0-95d4-8804d409d674 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.691799716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fb41564-5541-452c-9129-f2076f2c1a1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.691886456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fb41564-5541-452c-9129-f2076f2c1a1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.692716466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85e488ea82c19028139636b16e8d40f29567983a0874220266b7987519c9a47a,PodSandboxId:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722458876671928369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c0058332a909b6ab08f67e46f120dded092517137675e6091182dabc0af71e,PodSandboxId:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458876649937340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac13a0e24192f276cefb9eaa1f3878fbde51730f971e4a52acf2f97c2281061,PodSandboxId:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722458871928815052,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc1bed309534fa5ba5b2f2cc9b94a1bc6947eea73b27b94844e6e7c9dd0d0fc,PodSandboxId:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722458871919968064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db47822bfa954cad0c43f82ff343f7a77749d95d7cf9fec0ad588bbd60c2716e,PodSandboxId:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722458871891269942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3717159113f36a551d9b1bed8852e957d5d59a8ff8bb6245428221d955c10917,PodSandboxId:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722458871901289223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c672fdcddb69a43385430ddca6f4bb5510364c35b7db37943aa9e3101323021,PodSandboxId:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458867185344562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5caa046fdfd032d8f1f604a94f1a2abf45e6bcc79da96a5f77f995ef34eb927,PodSandboxId:c9035083b93498f3a430a1e5ddcffb193d2ac730b46ae73b845a138fff95ad39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458856372287716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a76274d8a3cdc97a5db3913793cf8850c4259716863899934322ba9a37e68d,PodSandboxId:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722458856232407684,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a428c65e5a6c716dca3785dc3886dcb23f8dd5916870bd139c1a37fb02797dce,PodSandboxId:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722458855170848376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5ef507a419f0f76b80ef7eb59295049eabf256b84df0a07e8eba3da91d6ae8,PodSandboxId:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722458855215193301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4293ee6c2b653485b8fa4ee909cc4508f6a9368f4c2f8a05c8361b39035991f,PodSandboxId:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722458855139741890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e2bd761ef798923456da2e96fb21ccad273c8890e7bce32b7f0e2163308275,PodSandboxId:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458855027292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012a445eae0656df3bc10043ab44565307ae6fa7133bb8e737d163e5c8e701e7,PodSandboxId:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722458855094993028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42fde9dab2aaacb0a2914818cfc1804ac24447e68a62fcc5a0bb92af220b1b1,PodSandboxId:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722458854812430939,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ff373824206516de8fe5897a272df3af894784337332151383ab243275cdb4,PodSandboxId:0b443bacbbfe3b945b0c0d315b2f9903a82572a26deb8b9d1bd93f7c85b83e84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722458837574374450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fb41564-5541-452c-9129-f2076f2c1a1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.714218087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efb4dcef-d54e-40a4-98af-51af4f726ebe name=/runtime.v1.RuntimeService/Version
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.714537244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efb4dcef-d54e-40a4-98af-51af4f726ebe name=/runtime.v1.RuntimeService/Version
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.716237365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2284644-229d-4c48-a6e9-89ea08d3c517 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.716615246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458880716594190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2284644-229d-4c48-a6e9-89ea08d3c517 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.717241743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81c6a141-21cb-4aa5-b854-d3008613c7c5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.717314148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81c6a141-21cb-4aa5-b854-d3008613c7c5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.717623055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85e488ea82c19028139636b16e8d40f29567983a0874220266b7987519c9a47a,PodSandboxId:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722458876671928369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c0058332a909b6ab08f67e46f120dded092517137675e6091182dabc0af71e,PodSandboxId:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458876649937340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac13a0e24192f276cefb9eaa1f3878fbde51730f971e4a52acf2f97c2281061,PodSandboxId:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722458871928815052,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc1bed309534fa5ba5b2f2cc9b94a1bc6947eea73b27b94844e6e7c9dd0d0fc,PodSandboxId:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722458871919968064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db47822bfa954cad0c43f82ff343f7a77749d95d7cf9fec0ad588bbd60c2716e,PodSandboxId:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722458871891269942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3717159113f36a551d9b1bed8852e957d5d59a8ff8bb6245428221d955c10917,PodSandboxId:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722458871901289223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c672fdcddb69a43385430ddca6f4bb5510364c35b7db37943aa9e3101323021,PodSandboxId:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458867185344562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5caa046fdfd032d8f1f604a94f1a2abf45e6bcc79da96a5f77f995ef34eb927,PodSandboxId:c9035083b93498f3a430a1e5ddcffb193d2ac730b46ae73b845a138fff95ad39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458856372287716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a76274d8a3cdc97a5db3913793cf8850c4259716863899934322ba9a37e68d,PodSandboxId:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722458856232407684,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a428c65e5a6c716dca3785dc3886dcb23f8dd5916870bd139c1a37fb02797dce,PodSandboxId:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722458855170848376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5ef507a419f0f76b80ef7eb59295049eabf256b84df0a07e8eba3da91d6ae8,PodSandboxId:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722458855215193301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4293ee6c2b653485b8fa4ee909cc4508f6a9368f4c2f8a05c8361b39035991f,PodSandboxId:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722458855139741890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e2bd761ef798923456da2e96fb21ccad273c8890e7bce32b7f0e2163308275,PodSandboxId:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458855027292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012a445eae0656df3bc10043ab44565307ae6fa7133bb8e737d163e5c8e701e7,PodSandboxId:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722458855094993028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42fde9dab2aaacb0a2914818cfc1804ac24447e68a62fcc5a0bb92af220b1b1,PodSandboxId:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722458854812430939,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ff373824206516de8fe5897a272df3af894784337332151383ab243275cdb4,PodSandboxId:0b443bacbbfe3b945b0c0d315b2f9903a82572a26deb8b9d1bd93f7c85b83e84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722458837574374450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81c6a141-21cb-4aa5-b854-d3008613c7c5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.768911976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01f36648-ba22-4a0c-81b5-74375f3bd0de name=/runtime.v1.RuntimeService/Version
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.769004799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01f36648-ba22-4a0c-81b5-74375f3bd0de name=/runtime.v1.RuntimeService/Version
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.770619562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc8caa74-8e48-4925-83ca-693f3192eabb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.771212719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722458880771189736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc8caa74-8e48-4925-83ca-693f3192eabb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.772067812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08807f96-82c8-49b3-a34e-335b0409a62e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.772144216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08807f96-82c8-49b3-a34e-335b0409a62e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 20:48:00 kubernetes-upgrade-519871 crio[2350]: time="2024-07-31 20:48:00.772608063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85e488ea82c19028139636b16e8d40f29567983a0874220266b7987519c9a47a,PodSandboxId:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722458876671928369,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c0058332a909b6ab08f67e46f120dded092517137675e6091182dabc0af71e,PodSandboxId:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722458876649937340,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ac13a0e24192f276cefb9eaa1f3878fbde51730f971e4a52acf2f97c2281061,PodSandboxId:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722458871928815052,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddc1bed309534fa5ba5b2f2cc9b94a1bc6947eea73b27b94844e6e7c9dd0d0fc,PodSandboxId:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722458871919968064,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db47822bfa954cad0c43f82ff343f7a77749d95d7cf9fec0ad588bbd60c2716e,PodSandboxId:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722458871891269942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3717159113f36a551d9b1bed8852e957d5d59a8ff8bb6245428221d955c10917,PodSandboxId:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722458871901289223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c672fdcddb69a43385430ddca6f4bb5510364c35b7db37943aa9e3101323021,PodSandboxId:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458867185344562,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5caa046fdfd032d8f1f604a94f1a2abf45e6bcc79da96a5f77f995ef34eb927,PodSandboxId:c9035083b93498f3a430a1e5ddcffb193d2ac730b46ae73b845a138fff95ad39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722458856372287716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a76274d8a3cdc97a5db3913793cf8850c4259716863899934322ba9a37e68d,PodSandboxId:46ab02ca6a57058cc937fffec0f279ca328e8ce22a3edf98426ab3268649e837,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722458856232407684,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5vd6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e20e580-9626-4f5c-ba3c-e7afd42dd316,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a428c65e5a6c716dca3785dc3886dcb23f8dd5916870bd139c1a37fb02797dce,PodSandboxId:0807ee45609bf002b50e6f825101b4e8c35b3827232b322d133d815a5cc2c145,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722458855170848376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c659cf2bec0f380d016377b15722e0fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5ef507a419f0f76b80ef7eb59295049eabf256b84df0a07e8eba3da91d6ae8,PodSandboxId:f019f70fb881ab314f021b511e592b031be7e39eadf5163f46f9f24bffa34e0c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722458855215193301,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 667111f8934afeba41c84a25a3d24799,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4293ee6c2b653485b8fa4ee909cc4508f6a9368f4c2f8a05c8361b39035991f,PodSandboxId:71f2617883d1d3546a6af869b4be376681ba662c348d850e761cb28d4a7fc80c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722458855139741890,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043c47b717400d740420554a4e7f23d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e2bd761ef798923456da2e96fb21ccad273c8890e7bce32b7f0e2163308275,PodSandboxId:e0b6833195f5bf4f3bd593a81aba65a9883080aef3f70ed6334e6f4c037946b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:ma
p[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722458855027292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 211cb9b4-1386-4ea6-a742-761b64b0de1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:012a445eae0656df3bc10043ab44565307ae6fa7133bb8e737d163e5c8e701e7,PodSandboxId:0f026a23f44a408bd8e2eb9ae65bdbdbcc6345cadc6bc01e0ee7731228a3f3ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722458855094993028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-519871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17ae136b412dddbae2487c452afb7881,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d42fde9dab2aaacb0a2914818cfc1804ac24447e68a62fcc5a0bb92af220b1b1,PodSandboxId:73a3238b486fd9d99d2e8b2231f93c20e10f762ea784ef4b934805fe802d9bc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722458854812430939,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z67gm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5566d8d4-c936-4849-873d-d506b2428e2a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ff373824206516de8fe5897a272df3af894784337332151383ab243275cdb4,PodSandboxId:0b443bacbbfe3b945b0c0d315b2f9903a82572a26deb8b9d1bd93f7c85b83e84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722458837574374450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2pjk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70098609-a6f4-4ddc-9279-953658a29d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08807f96-82c8-49b3-a34e-335b0409a62e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85e488ea82c19       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   4 seconds ago       Running             kube-proxy                2                   73a3238b486fd       kube-proxy-z67gm
	98c0058332a90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       3                   e0b6833195f5b       storage-provisioner
	1ac13a0e24192       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   8 seconds ago       Running             kube-controller-manager   2                   f019f70fb881a       kube-controller-manager-kubernetes-upgrade-519871
	ddc1bed309534       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   8 seconds ago       Running             kube-apiserver            2                   71f2617883d1d       kube-apiserver-kubernetes-upgrade-519871
	3717159113f36       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   8 seconds ago       Running             kube-scheduler            2                   0807ee45609bf       kube-scheduler-kubernetes-upgrade-519871
	db47822bfa954       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   8 seconds ago       Running             etcd                      2                   0f026a23f44a4       etcd-kubernetes-upgrade-519871
	9c672fdcddb69       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 seconds ago      Running             coredns                   2                   46ab02ca6a570       coredns-5cfdc65f69-5vd6c
	e5caa046fdfd0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Running             coredns                   1                   c9035083b9349       coredns-5cfdc65f69-2pjk5
	c7a76274d8a3c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   46ab02ca6a570       coredns-5cfdc65f69-5vd6c
	9f5ef507a419f       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   25 seconds ago      Exited              kube-controller-manager   1                   f019f70fb881a       kube-controller-manager-kubernetes-upgrade-519871
	a428c65e5a6c7       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   25 seconds ago      Exited              kube-scheduler            1                   0807ee45609bf       kube-scheduler-kubernetes-upgrade-519871
	a4293ee6c2b65       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   25 seconds ago      Exited              kube-apiserver            1                   71f2617883d1d       kube-apiserver-kubernetes-upgrade-519871
	012a445eae065       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   25 seconds ago      Exited              etcd                      1                   0f026a23f44a4       etcd-kubernetes-upgrade-519871
	d3e2bd761ef79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   25 seconds ago      Exited              storage-provisioner       2                   e0b6833195f5b       storage-provisioner
	d42fde9dab2aa       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   26 seconds ago      Exited              kube-proxy                1                   73a3238b486fd       kube-proxy-z67gm
	05ff373824206       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   0                   0b443bacbbfe3       coredns-5cfdc65f69-2pjk5
	
	
	==> coredns [05ff373824206516de8fe5897a272df3af894784337332151383ab243275cdb4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9c672fdcddb69a43385430ddca6f4bb5510364c35b7db37943aa9e3101323021] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c7a76274d8a3cdc97a5db3913793cf8850c4259716863899934322ba9a37e68d] <==
	
	
	==> coredns [e5caa046fdfd032d8f1f604a94f1a2abf45e6bcc79da96a5f77f995ef34eb927] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-519871
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-519871
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:47:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-519871
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 20:47:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 20:47:55 +0000   Wed, 31 Jul 2024 20:47:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 20:47:55 +0000   Wed, 31 Jul 2024 20:47:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 20:47:55 +0000   Wed, 31 Jul 2024 20:47:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 20:47:55 +0000   Wed, 31 Jul 2024 20:47:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.217
	  Hostname:    kubernetes-upgrade-519871
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb30e92f7eda45a18a425983aa3ccef0
	  System UUID:                cb30e92f-7eda-45a1-8a42-5983aa3ccef0
	  Boot ID:                    603b1c49-c463-427d-822d-38b0eba2f40a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-2pjk5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     47s
	  kube-system                 coredns-5cfdc65f69-5vd6c                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     47s
	  kube-system                 etcd-kubernetes-upgrade-519871                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         46s
	  kube-system                 kube-apiserver-kubernetes-upgrade-519871             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-519871    200m (10%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-proxy-z67gm                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-scheduler-kubernetes-upgrade-519871             100m (5%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 45s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node kubernetes-upgrade-519871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node kubernetes-upgrade-519871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)  kubelet          Node kubernetes-upgrade-519871 status is now: NodeHasSufficientPID
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           47s                node-controller  Node kubernetes-upgrade-519871 event: Registered Node kubernetes-upgrade-519871 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-519871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-519871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet          Node kubernetes-upgrade-519871 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-519871 event: Registered Node kubernetes-upgrade-519871 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.549704] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.069644] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064108] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.218333] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.146473] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.348822] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[Jul31 20:47] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +0.066481] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.515232] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +8.997811] systemd-fstab-generator[1256]: Ignoring "noauto" option for root device
	[  +0.103189] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.156629] kauditd_printk_skb: 70 callbacks suppressed
	[ +12.479947] systemd-fstab-generator[2269]: Ignoring "noauto" option for root device
	[  +0.110550] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.078620] systemd-fstab-generator[2281]: Ignoring "noauto" option for root device
	[  +0.217574] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.202049] systemd-fstab-generator[2307]: Ignoring "noauto" option for root device
	[  +0.334842] systemd-fstab-generator[2335]: Ignoring "noauto" option for root device
	[  +3.373773] systemd-fstab-generator[2487]: Ignoring "noauto" option for root device
	[  +0.805462] kauditd_printk_skb: 145 callbacks suppressed
	[ +12.416064] kauditd_printk_skb: 77 callbacks suppressed
	[  +3.831853] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +5.714512] kauditd_printk_skb: 48 callbacks suppressed
	[  +1.256031] systemd-fstab-generator[4065]: Ignoring "noauto" option for root device
	
	
	==> etcd [012a445eae0656df3bc10043ab44565307ae6fa7133bb8e737d163e5c8e701e7] <==
	{"level":"info","ts":"2024-07-31T20:47:36.531989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T20:47:36.532026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 received MsgPreVoteResp from 4464300d980c5173 at term 2"}
	{"level":"info","ts":"2024-07-31T20:47:36.532039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T20:47:36.532045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 received MsgVoteResp from 4464300d980c5173 at term 3"}
	{"level":"info","ts":"2024-07-31T20:47:36.532054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T20:47:36.532061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4464300d980c5173 elected leader 4464300d980c5173 at term 3"}
	{"level":"info","ts":"2024-07-31T20:47:36.545219Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4464300d980c5173","local-member-attributes":"{Name:kubernetes-upgrade-519871 ClientURLs:[https://192.168.72.217:2379]}","request-path":"/0/members/4464300d980c5173/attributes","cluster-id":"e7442b76738c43c9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T20:47:36.545414Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:47:36.545941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:47:36.546719Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T20:47:36.575827Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:47:36.575882Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T20:47:36.578559Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:47:36.588187Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T20:47:36.6034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.217:2379"}
	{"level":"info","ts":"2024-07-31T20:47:48.958952Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T20:47:48.959038Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-519871","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.217:2380"],"advertise-client-urls":["https://192.168.72.217:2379"]}
	{"level":"warn","ts":"2024-07-31T20:47:48.959154Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:47:48.959184Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:47:48.961022Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.217:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T20:47:48.961078Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.217:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T20:47:48.961146Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4464300d980c5173","current-leader-member-id":"4464300d980c5173"}
	{"level":"info","ts":"2024-07-31T20:47:48.965275Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.72.217:2380"}
	{"level":"info","ts":"2024-07-31T20:47:48.965348Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.72.217:2380"}
	{"level":"info","ts":"2024-07-31T20:47:48.965355Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-519871","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.217:2380"],"advertise-client-urls":["https://192.168.72.217:2379"]}
	
	
	==> etcd [db47822bfa954cad0c43f82ff343f7a77749d95d7cf9fec0ad588bbd60c2716e] <==
	{"level":"info","ts":"2024-07-31T20:47:52.323705Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e7442b76738c43c9","local-member-id":"4464300d980c5173","added-peer-id":"4464300d980c5173","added-peer-peer-urls":["https://192.168.72.217:2380"]}
	{"level":"info","ts":"2024-07-31T20:47:52.3239Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e7442b76738c43c9","local-member-id":"4464300d980c5173","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:47:52.323967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:47:52.328312Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T20:47:52.330509Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T20:47:52.330739Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4464300d980c5173","initial-advertise-peer-urls":["https://192.168.72.217:2380"],"listen-peer-urls":["https://192.168.72.217:2380"],"advertise-client-urls":["https://192.168.72.217:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.217:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T20:47:52.332871Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T20:47:52.332989Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.72.217:2380"}
	{"level":"info","ts":"2024-07-31T20:47:52.33393Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.217:2380"}
	{"level":"info","ts":"2024-07-31T20:47:53.575845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T20:47:53.575919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T20:47:53.576009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 received MsgPreVoteResp from 4464300d980c5173 at term 3"}
	{"level":"info","ts":"2024-07-31T20:47:53.576058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T20:47:53.57607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 received MsgVoteResp from 4464300d980c5173 at term 4"}
	{"level":"info","ts":"2024-07-31T20:47:53.576085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4464300d980c5173 became leader at term 4"}
	{"level":"info","ts":"2024-07-31T20:47:53.576099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4464300d980c5173 elected leader 4464300d980c5173 at term 4"}
	{"level":"info","ts":"2024-07-31T20:47:53.582491Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4464300d980c5173","local-member-attributes":"{Name:kubernetes-upgrade-519871 ClientURLs:[https://192.168.72.217:2379]}","request-path":"/0/members/4464300d980c5173/attributes","cluster-id":"e7442b76738c43c9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T20:47:53.582555Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:47:53.582977Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:47:53.591114Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T20:47:53.596427Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:47:53.597503Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T20:47:53.599263Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.217:2379"}
	{"level":"info","ts":"2024-07-31T20:47:53.598092Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:47:53.661841Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:48:01 up 1 min,  0 users,  load average: 2.36, 0.69, 0.24
	Linux kubernetes-upgrade-519871 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a4293ee6c2b653485b8fa4ee909cc4508f6a9368f4c2f8a05c8361b39035991f] <==
	E0731 20:47:38.884974       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0731 20:47:38.886383       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.378703ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-eaap3qg4fpyhghkktu7geidzu4" result=null
	I0731 20:47:38.900364       1 controller.go:157] Shutting down quota evaluator
	I0731 20:47:38.912636       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0731 20:47:38.914055       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 33.582845ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	W0731 20:47:39.651359       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:39.651479       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:40.651194       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:40.651203       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:41.651369       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:41.651445       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:42.652193       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:42.652416       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:43.651128       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:43.651138       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:44.651326       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:44.651576       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E0731 20:47:45.651282       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:45.651279       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0731 20:47:46.651523       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:46.651667       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:47.650919       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:47.651544       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:48.651277       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0731 20:47:48.651411       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [ddc1bed309534fa5ba5b2f2cc9b94a1bc6947eea73b27b94844e6e7c9dd0d0fc] <==
	I0731 20:47:55.704327       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0731 20:47:55.819165       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0731 20:47:55.819252       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0731 20:47:55.819756       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 20:47:55.821178       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 20:47:55.822164       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 20:47:55.834676       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0731 20:47:55.837929       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 20:47:55.838268       1 aggregator.go:171] initial CRD sync complete...
	I0731 20:47:55.838333       1 autoregister_controller.go:144] Starting autoregister controller
	I0731 20:47:55.838344       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 20:47:55.838352       1 cache.go:39] Caches are synced for autoregister controller
	I0731 20:47:55.882554       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 20:47:55.886277       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 20:47:55.886329       1 policy_source.go:224] refreshing policies
	I0731 20:47:55.905219       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 20:47:55.921838       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 20:47:56.638108       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 20:47:57.698077       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 20:47:57.713586       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 20:47:57.785224       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 20:47:57.851523       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 20:47:57.865375       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 20:47:59.269616       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 20:48:00.052961       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1ac13a0e24192f276cefb9eaa1f3878fbde51730f971e4a52acf2f97c2281061] <==
	I0731 20:47:59.787697       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0731 20:47:59.787994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="122.895µs"
	I0731 20:47:59.801972       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 20:47:59.806538       1 shared_informer.go:320] Caches are synced for deployment
	I0731 20:47:59.832977       1 shared_informer.go:320] Caches are synced for taint
	I0731 20:47:59.833189       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0731 20:47:59.833377       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-519871"
	I0731 20:47:59.833486       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 20:47:59.847902       1 shared_informer.go:320] Caches are synced for namespace
	I0731 20:47:59.982154       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 20:48:00.032678       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 20:48:00.035155       1 shared_informer.go:320] Caches are synced for job
	I0731 20:48:00.035165       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 20:48:00.038413       1 shared_informer.go:320] Caches are synced for disruption
	I0731 20:48:00.038476       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 20:48:00.038536       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-519871"
	I0731 20:48:00.055483       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 20:48:00.081544       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 20:48:00.083150       1 shared_informer.go:320] Caches are synced for cronjob
	I0731 20:48:00.143632       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 20:48:00.176682       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 20:48:00.278964       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 20:48:00.282430       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 20:48:00.282471       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 20:48:00.305918       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [9f5ef507a419f0f76b80ef7eb59295049eabf256b84df0a07e8eba3da91d6ae8] <==
	
	
	==> kube-proxy [85e488ea82c19028139636b16e8d40f29567983a0874220266b7987519c9a47a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 20:47:57.025145       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 20:47:57.039864       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.217"]
	E0731 20:47:57.039945       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 20:47:57.100822       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 20:47:57.100874       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:47:57.100908       1 server_linux.go:170] "Using iptables Proxier"
	I0731 20:47:57.107914       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 20:47:57.108179       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 20:47:57.108210       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:47:57.110605       1 config.go:197] "Starting service config controller"
	I0731 20:47:57.110638       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:47:57.110659       1 config.go:104] "Starting endpoint slice config controller"
	I0731 20:47:57.110673       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:47:57.112665       1 config.go:326] "Starting node config controller"
	I0731 20:47:57.112697       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:47:57.211546       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:47:57.211642       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:47:57.212915       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d42fde9dab2aaacb0a2914818cfc1804ac24447e68a62fcc5a0bb92af220b1b1] <==
	E0731 20:47:38.944670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:38.944125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:38.944745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	I0731 20:47:38.944205       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:47:38.944234       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:47:38.944323       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0731 20:47:38.944942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:40.068917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:40.069046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:40.331857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:40.331992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:40.531068       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:40.531258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:42.490382       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:42.490626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:42.729513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:42.729666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:43.237938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:43.237994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:45.900890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:45.900991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:46.544922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:46.545000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	W0731 20:47:46.770068       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	E0731 20:47:46.770164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [3717159113f36a551d9b1bed8852e957d5d59a8ff8bb6245428221d955c10917] <==
	I0731 20:47:53.325451       1 serving.go:386] Generated self-signed cert in-memory
	W0731 20:47:55.725535       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 20:47:55.725640       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:47:55.725703       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 20:47:55.725712       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 20:47:55.815624       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 20:47:55.815738       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:47:55.835174       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 20:47:55.836369       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 20:47:55.836410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0731 20:47:55.836445       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 20:47:55.937983       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a428c65e5a6c716dca3785dc3886dcb23f8dd5916870bd139c1a37fb02797dce] <==
	I0731 20:47:38.096241       1 serving.go:386] Generated self-signed cert in-memory
	W0731 20:47:38.696479       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 20:47:38.696578       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:47:38.696588       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 20:47:38.696597       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 20:47:38.784673       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 20:47:38.784725       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0731 20:47:38.787919       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0731 20:47:38.794694       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0731 20:47:38.794839       1 server.go:237] "waiting for handlers to sync" err="context canceled"
	E0731 20:47:38.795032       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:51.669719    3605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9043c47b717400d740420554a4e7f23d-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-519871\" (UID: \"9043c47b717400d740420554a4e7f23d\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-519871"
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:51.669757    3605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/667111f8934afeba41c84a25a3d24799-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-519871\" (UID: \"667111f8934afeba41c84a25a3d24799\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-519871"
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:51.669842    3605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/667111f8934afeba41c84a25a3d24799-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-519871\" (UID: \"667111f8934afeba41c84a25a3d24799\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-519871"
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:51.669865    3605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/667111f8934afeba41c84a25a3d24799-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-519871\" (UID: \"667111f8934afeba41c84a25a3d24799\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-519871"
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:51.854932    3605 scope.go:117] "RemoveContainer" containerID="012a445eae0656df3bc10043ab44565307ae6fa7133bb8e737d163e5c8e701e7"
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:51.855983    3605 scope.go:117] "RemoveContainer" containerID="a4293ee6c2b653485b8fa4ee909cc4508f6a9368f4c2f8a05c8361b39035991f"
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:51.858566    3605 scope.go:117] "RemoveContainer" containerID="a428c65e5a6c716dca3785dc3886dcb23f8dd5916870bd139c1a37fb02797dce"
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:51.858932    3605 scope.go:117] "RemoveContainer" containerID="9f5ef507a419f0f76b80ef7eb59295049eabf256b84df0a07e8eba3da91d6ae8"
	Jul 31 20:47:51 kubernetes-upgrade-519871 kubelet[3605]: E0731 20:47:51.976996    3605 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-519871?timeout=10s\": dial tcp 192.168.72.217:8443: connect: connection refused" interval="800ms"
	Jul 31 20:47:52 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:52.062820    3605 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-519871"
	Jul 31 20:47:52 kubernetes-upgrade-519871 kubelet[3605]: E0731 20:47:52.063824    3605 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.217:8443: connect: connection refused" node="kubernetes-upgrade-519871"
	Jul 31 20:47:52 kubernetes-upgrade-519871 kubelet[3605]: W0731 20:47:52.379145    3605 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0": dial tcp 192.168.72.217:8443: connect: connection refused
	Jul 31 20:47:52 kubernetes-upgrade-519871 kubelet[3605]: E0731 20:47:52.379240    3605 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-519871&limit=500&resourceVersion=0\": dial tcp 192.168.72.217:8443: connect: connection refused" logger="UnhandledError"
	Jul 31 20:47:52 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:52.865869    3605 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-519871"
	Jul 31 20:47:55 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:55.950721    3605 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-519871"
	Jul 31 20:47:55 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:55.950940    3605 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-519871"
	Jul 31 20:47:55 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:55.951030    3605 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 20:47:55 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:55.952576    3605 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 20:47:56 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:56.312640    3605 apiserver.go:52] "Watching apiserver"
	Jul 31 20:47:56 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:56.345428    3605 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 31 20:47:56 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:56.366499    3605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/211cb9b4-1386-4ea6-a742-761b64b0de1e-tmp\") pod \"storage-provisioner\" (UID: \"211cb9b4-1386-4ea6-a742-761b64b0de1e\") " pod="kube-system/storage-provisioner"
	Jul 31 20:47:56 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:56.374518    3605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5566d8d4-c936-4849-873d-d506b2428e2a-xtables-lock\") pod \"kube-proxy-z67gm\" (UID: \"5566d8d4-c936-4849-873d-d506b2428e2a\") " pod="kube-system/kube-proxy-z67gm"
	Jul 31 20:47:56 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:56.376465    3605 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5566d8d4-c936-4849-873d-d506b2428e2a-lib-modules\") pod \"kube-proxy-z67gm\" (UID: \"5566d8d4-c936-4849-873d-d506b2428e2a\") " pod="kube-system/kube-proxy-z67gm"
	Jul 31 20:47:56 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:56.626438    3605 scope.go:117] "RemoveContainer" containerID="d3e2bd761ef798923456da2e96fb21ccad273c8890e7bce32b7f0e2163308275"
	Jul 31 20:47:56 kubernetes-upgrade-519871 kubelet[3605]: I0731 20:47:56.630534    3605 scope.go:117] "RemoveContainer" containerID="d42fde9dab2aaacb0a2914818cfc1804ac24447e68a62fcc5a0bb92af220b1b1"
	
	
	==> storage-provisioner [98c0058332a909b6ab08f67e46f120dded092517137675e6091182dabc0af71e] <==
	I0731 20:47:56.895370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:47:56.924514       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:47:56.924740       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [d3e2bd761ef798923456da2e96fb21ccad273c8890e7bce32b7f0e2163308275] <==
	I0731 20:47:35.964046       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:47:38.811963       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:47:38.813095       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0731 20:47:42.288552       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0731 20:47:46.546425       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-519871 -n kubernetes-upgrade-519871
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-519871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-519871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-519871
--- FAIL: TestKubernetesUpgrade (432.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (320.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-239115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-239115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m20.560595266s)

                                                
                                                
-- stdout --
	* [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:48:03.182230  180868 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:48:03.182484  180868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:48:03.182493  180868 out.go:304] Setting ErrFile to fd 2...
	I0731 20:48:03.182497  180868 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:48:03.182682  180868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:48:03.183246  180868 out.go:298] Setting JSON to false
	I0731 20:48:03.184271  180868 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9019,"bootTime":1722449864,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:48:03.184340  180868 start.go:139] virtualization: kvm guest
	I0731 20:48:03.186745  180868 out.go:177] * [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:48:03.188384  180868 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:48:03.188389  180868 notify.go:220] Checking for updates...
	I0731 20:48:03.191232  180868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:48:03.192691  180868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:48:03.194206  180868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:48:03.195616  180868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:48:03.197076  180868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:48:03.199090  180868 config.go:182] Loaded profile config "bridge-341849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:48:03.199242  180868 config.go:182] Loaded profile config "enable-default-cni-341849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:48:03.199380  180868 config.go:182] Loaded profile config "flannel-341849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:48:03.199511  180868 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:48:03.238148  180868 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 20:48:03.239347  180868 start.go:297] selected driver: kvm2
	I0731 20:48:03.239365  180868 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:48:03.239380  180868 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:48:03.240087  180868 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:48:03.240179  180868 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:48:03.256180  180868 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:48:03.256239  180868 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 20:48:03.256512  180868 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:48:03.256581  180868 cni.go:84] Creating CNI manager for ""
	I0731 20:48:03.256596  180868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:48:03.256613  180868 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 20:48:03.256695  180868 start.go:340] cluster config:
	{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:48:03.256849  180868 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:48:03.258748  180868 out.go:177] * Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	I0731 20:48:03.259970  180868 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:48:03.260012  180868 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:48:03.260026  180868 cache.go:56] Caching tarball of preloaded images
	I0731 20:48:03.260106  180868 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:48:03.260119  180868 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:48:03.260218  180868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:48:03.260242  180868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json: {Name:mk18e6ee90358393c1765236e4370234f1775c75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
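
For context, the profile save logged just above writes config.json under a write lock so concurrent minikube invocations do not clobber each other's profile. The Go sketch below is illustrative only (the Profile struct and saveProfile name are made up, not minikube's API); it shows the write-temp-then-rename pattern that keeps such a JSON config from ever being read half-written.

    package main

    import (
        "encoding/json"
        "os"
        "path/filepath"
    )

    // Profile is a hypothetical stand-in for the cluster config that ends up in
    // .minikube/profiles/<name>/config.json.
    type Profile struct {
        Name              string `json:"Name"`
        KubernetesVersion string `json:"KubernetesVersion"`
        Driver            string `json:"Driver"`
    }

    // saveProfile writes the JSON to a temp file in the same directory and then
    // renames it into place, so readers never observe a partially written file.
    func saveProfile(path string, p Profile) error {
        data, err := json.MarshalIndent(p, "", "  ")
        if err != nil {
            return err
        }
        tmp, err := os.CreateTemp(filepath.Dir(path), ".config-*.json")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // no-op once the rename has succeeded
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        return os.Rename(tmp.Name(), path)
    }

    func main() {
        p := Profile{Name: "old-k8s-version-239115", KubernetesVersion: "v1.20.0", Driver: "kvm2"}
        if err := saveProfile("config.json", p); err != nil {
            panic(err)
        }
    }
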
	I0731 20:48:03.260387  180868 start.go:360] acquireMachinesLock for old-k8s-version-239115: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:48:47.566264  180868 start.go:364] duration metric: took 44.305829613s to acquireMachinesLock for "old-k8s-version-239115"
	I0731 20:48:47.566338  180868 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:48:47.566433  180868 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 20:48:47.568910  180868 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 20:48:47.569102  180868 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:48:47.569162  180868 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:48:47.586187  180868 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
	I0731 20:48:47.586631  180868 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:48:47.587139  180868 main.go:141] libmachine: Using API Version  1
	I0731 20:48:47.587181  180868 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:48:47.587525  180868 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:48:47.587743  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:48:47.587910  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:48:47.588058  180868 start.go:159] libmachine.API.Create for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:48:47.588088  180868 client.go:168] LocalClient.Create starting
	I0731 20:48:47.588122  180868 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 20:48:47.588161  180868 main.go:141] libmachine: Decoding PEM data...
	I0731 20:48:47.588177  180868 main.go:141] libmachine: Parsing certificate...
	I0731 20:48:47.588259  180868 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 20:48:47.588291  180868 main.go:141] libmachine: Decoding PEM data...
	I0731 20:48:47.588305  180868 main.go:141] libmachine: Parsing certificate...
	I0731 20:48:47.588336  180868 main.go:141] libmachine: Running pre-create checks...
	I0731 20:48:47.588356  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .PreCreateCheck
	I0731 20:48:47.588753  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:48:47.589199  180868 main.go:141] libmachine: Creating machine...
	I0731 20:48:47.589218  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .Create
	I0731 20:48:47.589382  180868 main.go:141] libmachine: (old-k8s-version-239115) Creating KVM machine...
	I0731 20:48:47.590643  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found existing default KVM network
	I0731 20:48:47.591902  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:47.591744  182545 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6d:4b:76} reservation:<nil>}
	I0731 20:48:47.592737  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:47.592635  182545 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ed:c8:e9} reservation:<nil>}
	I0731 20:48:47.593688  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:47.593589  182545 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ae960}
	I0731 20:48:47.593709  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | created network xml: 
	I0731 20:48:47.593719  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | <network>
	I0731 20:48:47.593727  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |   <name>mk-old-k8s-version-239115</name>
	I0731 20:48:47.593737  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |   <dns enable='no'/>
	I0731 20:48:47.593747  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |   
	I0731 20:48:47.593762  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0731 20:48:47.593773  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |     <dhcp>
	I0731 20:48:47.593787  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0731 20:48:47.593797  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |     </dhcp>
	I0731 20:48:47.593813  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |   </ip>
	I0731 20:48:47.593828  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG |   
	I0731 20:48:47.593841  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | </network>
	I0731 20:48:47.593849  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | 
	I0731 20:48:47.599805  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | trying to create private KVM network mk-old-k8s-version-239115 192.168.61.0/24...
	I0731 20:48:47.671786  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | private KVM network mk-old-k8s-version-239115 192.168.61.0/24 created
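
The network XML and the "trying to create private KVM network" lines above correspond to defining and starting a libvirt network for the new VM. As an illustration only (the real kvm2 driver talks to libvirt through its Go bindings rather than virsh), the sketch below shells out to virsh with the same XML; it assumes virsh is installed and qemu:///system is reachable.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // networkXML mirrors the definition printed in the log above.
    const networkXML = `<network>
      <name>mk-old-k8s-version-239115</name>
      <dns enable='no'/>
      <ip address='192.168.61.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.61.2' end='192.168.61.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Write the definition to a temp file, then hand it to virsh.
        f, err := os.CreateTemp("", "net-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            panic(err)
        }
        f.Close()

        for _, args := range [][]string{
            {"net-define", f.Name()},                   // register the network
            {"net-start", "mk-old-k8s-version-239115"}, // bring it up
        } {
            cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
            out, err := cmd.CombinedOutput()
            fmt.Printf("virsh %v: %s\n", args, out)
            if err != nil {
                panic(err)
            }
        }
    }
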
	I0731 20:48:47.671820  180868 main.go:141] libmachine: (old-k8s-version-239115) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115 ...
	I0731 20:48:47.671851  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:47.671747  182545 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:48:47.671863  180868 main.go:141] libmachine: (old-k8s-version-239115) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 20:48:47.671879  180868 main.go:141] libmachine: (old-k8s-version-239115) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 20:48:47.932403  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:47.932234  182545 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa...
	I0731 20:48:48.063956  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:48.063778  182545 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/old-k8s-version-239115.rawdisk...
	I0731 20:48:48.063998  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Writing magic tar header
	I0731 20:48:48.064017  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Writing SSH key tar header
	I0731 20:48:48.064031  180868 main.go:141] libmachine: (old-k8s-version-239115) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115 (perms=drwx------)
	I0731 20:48:48.064046  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:48.063901  182545 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115 ...
	I0731 20:48:48.064064  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115
	I0731 20:48:48.064074  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 20:48:48.064091  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:48:48.064105  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 20:48:48.064121  180868 main.go:141] libmachine: (old-k8s-version-239115) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 20:48:48.064134  180868 main.go:141] libmachine: (old-k8s-version-239115) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 20:48:48.064141  180868 main.go:141] libmachine: (old-k8s-version-239115) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 20:48:48.064148  180868 main.go:141] libmachine: (old-k8s-version-239115) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 20:48:48.064166  180868 main.go:141] libmachine: (old-k8s-version-239115) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 20:48:48.064177  180868 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:48:48.064185  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 20:48:48.064192  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Checking permissions on dir: /home/jenkins
	I0731 20:48:48.064199  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Checking permissions on dir: /home
	I0731 20:48:48.064206  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Skipping /home - not owner
	I0731 20:48:48.065523  180868 main.go:141] libmachine: (old-k8s-version-239115) define libvirt domain using xml: 
	I0731 20:48:48.065545  180868 main.go:141] libmachine: (old-k8s-version-239115) <domain type='kvm'>
	I0731 20:48:48.065555  180868 main.go:141] libmachine: (old-k8s-version-239115)   <name>old-k8s-version-239115</name>
	I0731 20:48:48.065563  180868 main.go:141] libmachine: (old-k8s-version-239115)   <memory unit='MiB'>2200</memory>
	I0731 20:48:48.065572  180868 main.go:141] libmachine: (old-k8s-version-239115)   <vcpu>2</vcpu>
	I0731 20:48:48.065579  180868 main.go:141] libmachine: (old-k8s-version-239115)   <features>
	I0731 20:48:48.065587  180868 main.go:141] libmachine: (old-k8s-version-239115)     <acpi/>
	I0731 20:48:48.065596  180868 main.go:141] libmachine: (old-k8s-version-239115)     <apic/>
	I0731 20:48:48.065603  180868 main.go:141] libmachine: (old-k8s-version-239115)     <pae/>
	I0731 20:48:48.065608  180868 main.go:141] libmachine: (old-k8s-version-239115)     
	I0731 20:48:48.065629  180868 main.go:141] libmachine: (old-k8s-version-239115)   </features>
	I0731 20:48:48.065643  180868 main.go:141] libmachine: (old-k8s-version-239115)   <cpu mode='host-passthrough'>
	I0731 20:48:48.065667  180868 main.go:141] libmachine: (old-k8s-version-239115)   
	I0731 20:48:48.065677  180868 main.go:141] libmachine: (old-k8s-version-239115)   </cpu>
	I0731 20:48:48.065686  180868 main.go:141] libmachine: (old-k8s-version-239115)   <os>
	I0731 20:48:48.065693  180868 main.go:141] libmachine: (old-k8s-version-239115)     <type>hvm</type>
	I0731 20:48:48.065702  180868 main.go:141] libmachine: (old-k8s-version-239115)     <boot dev='cdrom'/>
	I0731 20:48:48.065714  180868 main.go:141] libmachine: (old-k8s-version-239115)     <boot dev='hd'/>
	I0731 20:48:48.065723  180868 main.go:141] libmachine: (old-k8s-version-239115)     <bootmenu enable='no'/>
	I0731 20:48:48.065733  180868 main.go:141] libmachine: (old-k8s-version-239115)   </os>
	I0731 20:48:48.065742  180868 main.go:141] libmachine: (old-k8s-version-239115)   <devices>
	I0731 20:48:48.065753  180868 main.go:141] libmachine: (old-k8s-version-239115)     <disk type='file' device='cdrom'>
	I0731 20:48:48.065890  180868 main.go:141] libmachine: (old-k8s-version-239115)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/boot2docker.iso'/>
	I0731 20:48:48.065924  180868 main.go:141] libmachine: (old-k8s-version-239115)       <target dev='hdc' bus='scsi'/>
	I0731 20:48:48.065945  180868 main.go:141] libmachine: (old-k8s-version-239115)       <readonly/>
	I0731 20:48:48.065957  180868 main.go:141] libmachine: (old-k8s-version-239115)     </disk>
	I0731 20:48:48.065968  180868 main.go:141] libmachine: (old-k8s-version-239115)     <disk type='file' device='disk'>
	I0731 20:48:48.065982  180868 main.go:141] libmachine: (old-k8s-version-239115)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 20:48:48.066002  180868 main.go:141] libmachine: (old-k8s-version-239115)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/old-k8s-version-239115.rawdisk'/>
	I0731 20:48:48.066018  180868 main.go:141] libmachine: (old-k8s-version-239115)       <target dev='hda' bus='virtio'/>
	I0731 20:48:48.066040  180868 main.go:141] libmachine: (old-k8s-version-239115)     </disk>
	I0731 20:48:48.066056  180868 main.go:141] libmachine: (old-k8s-version-239115)     <interface type='network'>
	I0731 20:48:48.066065  180868 main.go:141] libmachine: (old-k8s-version-239115)       <source network='mk-old-k8s-version-239115'/>
	I0731 20:48:48.066076  180868 main.go:141] libmachine: (old-k8s-version-239115)       <model type='virtio'/>
	I0731 20:48:48.066108  180868 main.go:141] libmachine: (old-k8s-version-239115)     </interface>
	I0731 20:48:48.066139  180868 main.go:141] libmachine: (old-k8s-version-239115)     <interface type='network'>
	I0731 20:48:48.066154  180868 main.go:141] libmachine: (old-k8s-version-239115)       <source network='default'/>
	I0731 20:48:48.066166  180868 main.go:141] libmachine: (old-k8s-version-239115)       <model type='virtio'/>
	I0731 20:48:48.066190  180868 main.go:141] libmachine: (old-k8s-version-239115)     </interface>
	I0731 20:48:48.066203  180868 main.go:141] libmachine: (old-k8s-version-239115)     <serial type='pty'>
	I0731 20:48:48.066227  180868 main.go:141] libmachine: (old-k8s-version-239115)       <target port='0'/>
	I0731 20:48:48.066246  180868 main.go:141] libmachine: (old-k8s-version-239115)     </serial>
	I0731 20:48:48.066259  180868 main.go:141] libmachine: (old-k8s-version-239115)     <console type='pty'>
	I0731 20:48:48.066270  180868 main.go:141] libmachine: (old-k8s-version-239115)       <target type='serial' port='0'/>
	I0731 20:48:48.066279  180868 main.go:141] libmachine: (old-k8s-version-239115)     </console>
	I0731 20:48:48.066288  180868 main.go:141] libmachine: (old-k8s-version-239115)     <rng model='virtio'>
	I0731 20:48:48.066298  180868 main.go:141] libmachine: (old-k8s-version-239115)       <backend model='random'>/dev/random</backend>
	I0731 20:48:48.066307  180868 main.go:141] libmachine: (old-k8s-version-239115)     </rng>
	I0731 20:48:48.066314  180868 main.go:141] libmachine: (old-k8s-version-239115)     
	I0731 20:48:48.066324  180868 main.go:141] libmachine: (old-k8s-version-239115)     
	I0731 20:48:48.066332  180868 main.go:141] libmachine: (old-k8s-version-239115)   </devices>
	I0731 20:48:48.066342  180868 main.go:141] libmachine: (old-k8s-version-239115) </domain>
	I0731 20:48:48.066354  180868 main.go:141] libmachine: (old-k8s-version-239115) 
	I0731 20:48:48.071100  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:88:44:fd in network default
	I0731 20:48:48.071785  180868 main.go:141] libmachine: (old-k8s-version-239115) Ensuring networks are active...
	I0731 20:48:48.071810  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:48.072592  180868 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network default is active
	I0731 20:48:48.072913  180868 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network mk-old-k8s-version-239115 is active
	I0731 20:48:48.073454  180868 main.go:141] libmachine: (old-k8s-version-239115) Getting domain xml...
	I0731 20:48:48.074176  180868 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:48:49.488996  180868 main.go:141] libmachine: (old-k8s-version-239115) Waiting to get IP...
	I0731 20:48:49.489793  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:49.490343  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:49.490407  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:49.490323  182545 retry.go:31] will retry after 275.883979ms: waiting for machine to come up
	I0731 20:48:49.768116  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:49.768692  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:49.768720  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:49.768637  182545 retry.go:31] will retry after 279.063587ms: waiting for machine to come up
	I0731 20:48:50.049035  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:50.049660  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:50.049690  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:50.049611  182545 retry.go:31] will retry after 326.399608ms: waiting for machine to come up
	I0731 20:48:50.378022  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:50.378696  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:50.378726  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:50.378661  182545 retry.go:31] will retry after 499.056133ms: waiting for machine to come up
	I0731 20:48:50.879306  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:50.879726  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:50.879752  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:50.879676  182545 retry.go:31] will retry after 605.727662ms: waiting for machine to come up
	I0731 20:48:51.486746  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:51.487275  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:51.487303  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:51.487231  182545 retry.go:31] will retry after 820.139591ms: waiting for machine to come up
	I0731 20:48:52.309453  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:52.309983  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:52.310005  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:52.309910  182545 retry.go:31] will retry after 935.415763ms: waiting for machine to come up
	I0731 20:48:53.247658  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:53.248184  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:53.248207  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:53.248116  182545 retry.go:31] will retry after 977.024217ms: waiting for machine to come up
	I0731 20:48:54.227117  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:54.227585  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:54.227626  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:54.227532  182545 retry.go:31] will retry after 1.55095842s: waiting for machine to come up
	I0731 20:48:55.780284  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:55.780752  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:55.780781  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:55.780725  182545 retry.go:31] will retry after 2.277142213s: waiting for machine to come up
	I0731 20:48:58.059835  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:48:58.060356  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:48:58.060385  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:48:58.060294  182545 retry.go:31] will retry after 2.184891535s: waiting for machine to come up
	I0731 20:49:00.246698  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:00.247237  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:49:00.247281  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:49:00.247189  182545 retry.go:31] will retry after 2.752490743s: waiting for machine to come up
	I0731 20:49:03.001602  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:03.002080  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:49:03.002121  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:49:03.002022  182545 retry.go:31] will retry after 4.306167039s: waiting for machine to come up
	I0731 20:49:07.313287  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:07.313835  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:49:07.313863  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:49:07.313789  182545 retry.go:31] will retry after 4.00724258s: waiting for machine to come up
	I0731 20:49:11.322498  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:11.323058  180868 main.go:141] libmachine: (old-k8s-version-239115) Found IP for machine: 192.168.61.51
	I0731 20:49:11.323086  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has current primary IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:11.323095  180868 main.go:141] libmachine: (old-k8s-version-239115) Reserving static IP address...
	I0731 20:49:11.323490  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"} in network mk-old-k8s-version-239115
	I0731 20:49:11.399586  180868 main.go:141] libmachine: (old-k8s-version-239115) Reserved static IP address: 192.168.61.51
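
The "will retry after ..." lines above show the driver polling for the VM's DHCP lease with a growing, jittered delay until an IP appears. The sketch below reproduces that retry shape in isolation; lookupIP is a hypothetical stand-in for the real lease lookup, not a minikube function.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder; in the real flow the driver parses the libvirt
    // DHCP leases for the domain's MAC address.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Sleep a jittered, growing interval, as in the log's retry messages.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 5*time.Second {
                backoff = backoff * 3 / 2
            }
        }
        return "", errors.New("timed out waiting for IP")
    }

    func main() {
        if ip, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("got IP:", ip)
        }
    }
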
	I0731 20:49:11.399621  180868 main.go:141] libmachine: (old-k8s-version-239115) Waiting for SSH to be available...
	I0731 20:49:11.399632  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:49:11.402443  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:11.402756  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115
	I0731 20:49:11.402795  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find defined IP address of network mk-old-k8s-version-239115 interface with MAC address 52:54:00:5a:70:0d
	I0731 20:49:11.402837  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:49:11.402861  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:49:11.402939  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:49:11.402962  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:49:11.402975  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:49:11.406843  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: exit status 255: 
	I0731 20:49:11.406869  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 20:49:11.406880  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | command : exit 0
	I0731 20:49:11.406889  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | err     : exit status 255
	I0731 20:49:11.406900  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | output  : 
	I0731 20:49:14.407861  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:49:14.410154  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.410543  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:14.410569  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.410695  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:49:14.410723  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:49:14.410751  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:49:14.410767  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:49:14.410782  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:49:14.533713  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: <nil>: 
	I0731 20:49:14.533969  180868 main.go:141] libmachine: (old-k8s-version-239115) KVM machine creation complete!
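
The WaitForSSH step above simply runs "exit 0" over SSH until it succeeds, using the external ssh client with the options dumped in the log (the first attempt fails with exit status 255 because sshd is not up yet). A minimal stand-alone version of that probe, with the key path and address copied from this log and a retry count chosen arbitrarily, might look like:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady reports whether the guest accepts the key and can run a command.
    func sshReady(addr, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-i", keyPath,
            "docker@"+addr,
            "exit 0")
        return cmd.Run() == nil // exit status 0 means the probe command ran
    }

    func main() {
        addr := "192.168.61.51"
        key := "/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa"
        for i := 0; i < 10; i++ {
            if sshReady(addr, key) {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(3 * time.Second) // the log retries roughly every 3 seconds
        }
        fmt.Println("gave up waiting for SSH")
    }
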
	I0731 20:49:14.534257  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:49:14.534820  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:49:14.535016  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:49:14.535174  180868 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 20:49:14.535191  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetState
	I0731 20:49:14.536423  180868 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 20:49:14.536440  180868 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 20:49:14.536448  180868 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 20:49:14.536457  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:14.539053  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.539556  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:14.539586  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.539772  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:14.539948  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:14.540168  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:14.540300  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:14.540467  180868 main.go:141] libmachine: Using SSH client type: native
	I0731 20:49:14.540675  180868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:49:14.540688  180868 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 20:49:14.640810  180868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:49:14.640847  180868 main.go:141] libmachine: Detecting the provisioner...
	I0731 20:49:14.640860  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:14.644046  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.644429  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:14.644476  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.644595  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:14.644786  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:14.644941  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:14.645098  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:14.645289  180868 main.go:141] libmachine: Using SSH client type: native
	I0731 20:49:14.645537  180868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:49:14.645552  180868 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 20:49:14.746678  180868 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 20:49:14.746821  180868 main.go:141] libmachine: found compatible host: buildroot
	I0731 20:49:14.746837  180868 main.go:141] libmachine: Provisioning with buildroot...
	I0731 20:49:14.746845  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:49:14.747094  180868 buildroot.go:166] provisioning hostname "old-k8s-version-239115"
	I0731 20:49:14.747120  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:49:14.747342  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:14.750149  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.750647  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:14.750675  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.750959  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:14.751157  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:14.751316  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:14.751450  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:14.751606  180868 main.go:141] libmachine: Using SSH client type: native
	I0731 20:49:14.751829  180868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:49:14.751843  180868 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239115 && echo "old-k8s-version-239115" | sudo tee /etc/hostname
	I0731 20:49:14.868815  180868 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239115
	
	I0731 20:49:14.868848  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:14.871854  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.872149  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:14.872197  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.872334  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:14.872575  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:14.872730  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:14.872898  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:14.873057  180868 main.go:141] libmachine: Using SSH client type: native
	I0731 20:49:14.873277  180868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:49:14.873296  180868 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:49:14.983434  180868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:49:14.983469  180868 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:49:14.983511  180868 buildroot.go:174] setting up certificates
	I0731 20:49:14.983524  180868 provision.go:84] configureAuth start
	I0731 20:49:14.983539  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:49:14.983887  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:49:14.986591  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.986896  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:14.986928  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.987059  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:14.989183  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.989538  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:14.989577  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:14.989711  180868 provision.go:143] copyHostCerts
	I0731 20:49:14.989773  180868 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:49:14.989789  180868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:49:14.989843  180868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:49:14.989926  180868 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:49:14.989934  180868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:49:14.989953  180868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:49:14.990005  180868 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:49:14.990013  180868 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:49:14.990029  180868 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:49:14.990111  180868 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239115 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-239115]
	I0731 20:49:15.170692  180868 provision.go:177] copyRemoteCerts
	I0731 20:49:15.170761  180868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:49:15.170786  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:15.173594  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.173971  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.174003  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.174246  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:15.174456  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:15.174639  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:15.174770  180868 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:49:15.256125  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 20:49:15.283580  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:49:15.309503  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 20:49:15.335838  180868 provision.go:87] duration metric: took 352.29786ms to configureAuth
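
configureAuth above generates a server certificate whose SANs include 127.0.0.1, 192.168.61.51, localhost, minikube and the profile name, signed by the CA under .minikube/certs, and then copies it to the guest. The sketch below shows how such a SAN certificate can be issued with Go's crypto/x509; it creates a throwaway CA in-process instead of loading ca.pem/ca-key.pem and ignores errors for brevity, so it is an illustration rather than minikube's implementation.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (stand-in for .minikube/certs/ca.pem + ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-239115"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-239115"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.51")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
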
	I0731 20:49:15.335880  180868 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:49:15.336131  180868 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:49:15.336283  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:15.339328  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.339794  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.339824  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.340037  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:15.340232  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:15.340407  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:15.340551  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:15.340825  180868 main.go:141] libmachine: Using SSH client type: native
	I0731 20:49:15.341012  180868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:49:15.341029  180868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:49:15.620211  180868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:49:15.620251  180868 main.go:141] libmachine: Checking connection to Docker...
	I0731 20:49:15.620264  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetURL
	I0731 20:49:15.621625  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using libvirt version 6000000
	I0731 20:49:15.623988  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.624294  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.624324  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.624470  180868 main.go:141] libmachine: Docker is up and running!
	I0731 20:49:15.624487  180868 main.go:141] libmachine: Reticulating splines...
	I0731 20:49:15.624496  180868 client.go:171] duration metric: took 28.036396515s to LocalClient.Create
	I0731 20:49:15.624523  180868 start.go:167] duration metric: took 28.036467385s to libmachine.API.Create "old-k8s-version-239115"
	I0731 20:49:15.624546  180868 start.go:293] postStartSetup for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:49:15.624560  180868 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:49:15.624584  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:49:15.624892  180868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:49:15.624924  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:15.627101  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.627398  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.627427  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.627554  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:15.627752  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:15.627921  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:15.628085  180868 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:49:15.708166  180868 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:49:15.713088  180868 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:49:15.713117  180868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:49:15.713202  180868 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:49:15.713303  180868 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:49:15.713453  180868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:49:15.723940  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:49:15.754576  180868 start.go:296] duration metric: took 130.011597ms for postStartSetup
	I0731 20:49:15.754643  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:49:15.755301  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:49:15.758245  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.758625  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.758658  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.758970  180868 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:49:15.759228  180868 start.go:128] duration metric: took 28.192781051s to createHost
	I0731 20:49:15.759256  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:15.761496  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.761803  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.761831  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.761936  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:15.762113  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:15.762271  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:15.762381  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:15.762500  180868 main.go:141] libmachine: Using SSH client type: native
	I0731 20:49:15.762669  180868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:49:15.762679  180868 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:49:15.862275  180868 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722458955.839039667
	
	I0731 20:49:15.862298  180868 fix.go:216] guest clock: 1722458955.839039667
	I0731 20:49:15.862305  180868 fix.go:229] Guest: 2024-07-31 20:49:15.839039667 +0000 UTC Remote: 2024-07-31 20:49:15.759241805 +0000 UTC m=+72.612524077 (delta=79.797862ms)
	I0731 20:49:15.862325  180868 fix.go:200] guest clock delta is within tolerance: 79.797862ms
	I0731 20:49:15.862331  180868 start.go:83] releasing machines lock for "old-k8s-version-239115", held for 28.296041184s
	I0731 20:49:15.862353  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:49:15.862658  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:49:15.865581  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.865969  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.866013  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.866185  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:49:15.866765  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:49:15.866972  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:49:15.867074  180868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:49:15.867120  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:15.867245  180868 ssh_runner.go:195] Run: cat /version.json
	I0731 20:49:15.867270  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:49:15.869970  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.870338  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.870410  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.870443  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.870569  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:15.870741  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:15.870888  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:15.870912  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:15.870923  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:15.871088  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:49:15.871098  180868 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:49:15.871254  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:49:15.871403  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:49:15.871551  180868 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
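(Editor's note, not part of the log: the two "new ssh client" lines above show the connection parameters minikube uses for this VM: 192.168.61.51:22, the profile's id_rsa key, and user "docker"; every "Run:" line that follows is executed over such a connection. The following is a minimal, illustrative Go sketch of running one remote command with those parameters via golang.org/x/crypto/ssh; it is not minikube's actual ssh_runner implementation, and the command shown is just the next one from the log.)

// Illustrative only: connect with the key, user, and address shown in the log and run one command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Values taken from the "new ssh client" log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.61.51:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("systemctl --version")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}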
	I0731 20:49:15.950965  180868 ssh_runner.go:195] Run: systemctl --version
	I0731 20:49:15.979478  180868 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:49:16.157005  180868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:49:16.164075  180868 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:49:16.164155  180868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:49:16.182439  180868 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:49:16.182480  180868 start.go:495] detecting cgroup driver to use...
	I0731 20:49:16.182553  180868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:49:16.206510  180868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:49:16.221774  180868 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:49:16.221846  180868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:49:16.242932  180868 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:49:16.260890  180868 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:49:16.400206  180868 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:49:16.568845  180868 docker.go:233] disabling docker service ...
	I0731 20:49:16.568903  180868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:49:16.585322  180868 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:49:16.598790  180868 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:49:16.739005  180868 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:49:16.866788  180868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:49:16.881945  180868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:49:16.902431  180868 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:49:16.902497  180868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:49:16.912716  180868 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:49:16.912790  180868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:49:16.923492  180868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:49:16.934312  180868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
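(Editor's note, not part of the log: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning pause_image to registry.k8s.io/pause:3.2, setting cgroup_manager to "cgroupfs", dropping any existing conmon_cgroup line, and re-adding conmon_cgroup = "pod" after cgroup_manager. Below is a hedged Go sketch of the same line-oriented replacements; the file path and values come from the log, but the code itself is not minikube's.)

// Minimal sketch: apply the same replacements the sed commands above perform.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// pause_image = "registry.k8s.io/pause:3.2"
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
	// cgroup_manager = "cgroupfs"
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it right after cgroup_manager
	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
}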
	I0731 20:49:16.945068  180868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:49:16.958577  180868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:49:16.970338  180868 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:49:16.970403  180868 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:49:16.983942  180868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:49:16.993979  180868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:49:17.115305  180868 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:49:17.296878  180868 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:49:17.296970  180868 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:49:17.301806  180868 start.go:563] Will wait 60s for crictl version
	I0731 20:49:17.301868  180868 ssh_runner.go:195] Run: which crictl
	I0731 20:49:17.306029  180868 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:49:17.359143  180868 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:49:17.359224  180868 ssh_runner.go:195] Run: crio --version
	I0731 20:49:17.389713  180868 ssh_runner.go:195] Run: crio --version
	I0731 20:49:17.425359  180868 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:49:17.426668  180868 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:49:17.430106  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:17.430547  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:49:04 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:49:17.430578  180868 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:49:17.430815  180868 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 20:49:17.435123  180868 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:49:17.449161  180868 kubeadm.go:883] updating cluster {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:49:17.449278  180868 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:49:17.449357  180868 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:49:17.485042  180868 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:49:17.485115  180868 ssh_runner.go:195] Run: which lz4
	I0731 20:49:17.490267  180868 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:49:17.495397  180868 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:49:17.495432  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:49:19.321488  180868 crio.go:462] duration metric: took 1.831258201s to copy over tarball
	I0731 20:49:19.321588  180868 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:49:22.719379  180868 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.397721815s)
	I0731 20:49:22.719408  180868 crio.go:469] duration metric: took 3.397874265s to extract the tarball
	I0731 20:49:22.719418  180868 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:49:22.767503  180868 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:49:22.866995  180868 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:49:22.867026  180868 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:49:22.867128  180868 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:49:22.867130  180868 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:49:22.867187  180868 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:49:22.867195  180868 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:49:22.867159  180868 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:49:22.867223  180868 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:49:22.867235  180868 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:49:22.867137  180868 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:49:22.869130  180868 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:49:22.869157  180868 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:49:22.869157  180868 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:49:22.869186  180868 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:49:22.869197  180868 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:49:22.869221  180868 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:49:22.869236  180868 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:49:22.869245  180868 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:49:23.021261  180868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:49:23.059839  180868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:49:23.061711  180868 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:49:23.061763  180868 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:49:23.061818  180868 ssh_runner.go:195] Run: which crictl
	I0731 20:49:23.087957  180868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:49:23.096075  180868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:49:23.108858  180868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:49:23.108949  180868 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:49:23.108990  180868 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:49:23.109032  180868 ssh_runner.go:195] Run: which crictl
	I0731 20:49:23.190652  180868 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:49:23.190708  180868 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:49:23.190763  180868 ssh_runner.go:195] Run: which crictl
	I0731 20:49:23.199604  180868 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:49:23.199646  180868 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:49:23.199695  180868 ssh_runner.go:195] Run: which crictl
	I0731 20:49:23.199705  180868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:49:23.199808  180868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:49:23.199836  180868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:49:23.209291  180868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:49:23.209604  180868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:49:23.215217  180868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:49:23.287502  180868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:49:23.287617  180868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:49:23.317718  180868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:49:23.346917  180868 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:49:23.346992  180868 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:49:23.347047  180868 ssh_runner.go:195] Run: which crictl
	I0731 20:49:23.368821  180868 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:49:23.368865  180868 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:49:23.368914  180868 ssh_runner.go:195] Run: which crictl
	I0731 20:49:23.373798  180868 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:49:23.373842  180868 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:49:23.373886  180868 ssh_runner.go:195] Run: which crictl
	I0731 20:49:23.378512  180868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:49:23.378585  180868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:49:23.381379  180868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:49:23.381481  180868 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:49:23.426272  180868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:49:23.446002  180868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:49:23.454381  180868 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:49:23.778486  180868 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:49:23.922158  180868 cache_images.go:92] duration metric: took 1.055111179s to LoadCachedImages
	W0731 20:49:23.922254  180868 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0731 20:49:23.922267  180868 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0731 20:49:23.922404  180868 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:49:23.922511  180868 ssh_runner.go:195] Run: crio config
	I0731 20:49:23.992620  180868 cni.go:84] Creating CNI manager for ""
	I0731 20:49:23.992646  180868 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:49:23.992659  180868 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:49:23.992685  180868 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239115 NodeName:old-k8s-version-239115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:49:23.992964  180868 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239115"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:49:23.993061  180868 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:49:24.007236  180868 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:49:24.007311  180868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:49:24.017859  180868 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 20:49:24.041131  180868 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:49:24.060443  180868 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 20:49:24.081858  180868 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0731 20:49:24.087325  180868 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:49:24.104843  180868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:49:24.268405  180868 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:49:24.292251  180868 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115 for IP: 192.168.61.51
	I0731 20:49:24.292278  180868 certs.go:194] generating shared ca certs ...
	I0731 20:49:24.292300  180868 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:49:24.292560  180868 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:49:24.292645  180868 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:49:24.292661  180868 certs.go:256] generating profile certs ...
	I0731 20:49:24.292750  180868 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key
	I0731 20:49:24.292769  180868 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.crt with IP's: []
	I0731 20:49:24.588977  180868 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.crt ...
	I0731 20:49:24.589017  180868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.crt: {Name:mk31ffb3d18c5f0788419774544bcda595753544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:49:24.589259  180868 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key ...
	I0731 20:49:24.589281  180868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key: {Name:mk7cc49529eb8756b1cb804c6c11337d96a036db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:49:24.589423  180868 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83
	I0731 20:49:24.589446  180868 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt.072d7f83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.51]
	I0731 20:49:24.950738  180868 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt.072d7f83 ...
	I0731 20:49:24.950772  180868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt.072d7f83: {Name:mk1267c33021ce69fcd1f868be09b05d8c4557e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:49:24.950948  180868 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83 ...
	I0731 20:49:24.950964  180868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83: {Name:mk696a9938beae2f47fa52aa0f4e064bccb90f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:49:24.951031  180868 certs.go:381] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt.072d7f83 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt
	I0731 20:49:24.951099  180868 certs.go:385] copying /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83 -> /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key
	I0731 20:49:24.951154  180868 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key
	I0731 20:49:24.951169  180868 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt with IP's: []
	I0731 20:49:25.113577  180868 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt ...
	I0731 20:49:25.113605  180868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt: {Name:mk26293cb994c024f9dced96ccd471e68c5b5853 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:49:25.113789  180868 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key ...
	I0731 20:49:25.113807  180868 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key: {Name:mk7e1f36ba276800d6a1fe2a0c02ec2a4840a84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
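(Editor's note, not part of the log: the certs.go/crypto.go lines above record minikube generating the profile's client, apiserver, and aggregator certificates, with the apiserver cert carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.51]. The Go sketch below only illustrates issuing a certificate with those IP SANs using the standard library; it is self-signed for brevity, whereas minikube signs its profile certs with the minikubeCA key shown earlier.)

// Illustrative sketch: create a cert with the IP SANs from the apiserver profile cert log line.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs taken from the "Generating cert ... with IP's:" log line above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"),
		net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.61.51"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed here for brevity; minikube signs the real profile cert with its minikubeCA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}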
	I0731 20:49:25.114041  180868 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:49:25.114084  180868 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:49:25.114095  180868 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:49:25.114117  180868 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:49:25.114139  180868 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:49:25.114160  180868 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:49:25.114197  180868 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:49:25.114744  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:49:25.153524  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:49:25.183001  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:49:25.214008  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:49:25.246071  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 20:49:25.275789  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:49:25.306971  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:49:25.345543  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:49:25.371632  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:49:25.399333  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:49:25.430873  180868 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:49:25.457316  180868 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:49:25.477205  180868 ssh_runner.go:195] Run: openssl version
	I0731 20:49:25.485518  180868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:49:25.500541  180868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:49:25.506466  180868 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:49:25.506535  180868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:49:25.514922  180868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:49:25.533045  180868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:49:25.545612  180868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:49:25.552926  180868 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:49:25.553075  180868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:49:25.559850  180868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:49:25.573668  180868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:49:25.588905  180868 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:49:25.595050  180868 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:49:25.595122  180868 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:49:25.601308  180868 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:49:25.615905  180868 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:49:25.621440  180868 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 20:49:25.621501  180868 kubeadm.go:392] StartCluster: {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:49:25.621598  180868 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:49:25.621661  180868 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:49:25.666334  180868 cri.go:89] found id: ""
	I0731 20:49:25.666405  180868 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:49:25.676565  180868 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:49:25.686598  180868 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:49:25.696479  180868 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:49:25.696504  180868 kubeadm.go:157] found existing configuration files:
	
	I0731 20:49:25.696562  180868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:49:25.705862  180868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:49:25.705921  180868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:49:25.716071  180868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:49:25.727150  180868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:49:25.727222  180868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:49:25.740558  180868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:49:25.751029  180868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:49:25.751095  180868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:49:25.760898  180868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:49:25.770187  180868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:49:25.770280  180868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:49:25.780324  180868 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 20:49:25.908332  180868 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 20:49:25.908389  180868 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 20:49:26.121653  180868 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 20:49:26.121798  180868 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 20:49:26.121920  180868 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 20:49:26.406879  180868 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 20:49:26.409794  180868 out.go:204]   - Generating certificates and keys ...
	I0731 20:49:26.409870  180868 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 20:49:26.409925  180868 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 20:49:26.534002  180868 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 20:49:26.656899  180868 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 20:49:27.321362  180868 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 20:49:27.902952  180868 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 20:49:28.166936  180868 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 20:49:28.167115  180868 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-239115] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0731 20:49:28.665588  180868 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 20:49:28.665877  180868 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-239115] and IPs [192.168.61.51 127.0.0.1 ::1]
	I0731 20:49:28.752233  180868 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 20:49:28.807285  180868 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 20:49:29.256440  180868 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 20:49:29.256893  180868 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 20:49:29.384639  180868 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 20:49:29.483424  180868 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 20:49:29.838468  180868 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 20:49:30.155030  180868 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 20:49:30.177979  180868 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 20:49:30.178879  180868 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 20:49:30.179112  180868 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 20:49:30.363130  180868 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 20:49:30.365106  180868 out.go:204]   - Booting up control plane ...
	I0731 20:49:30.365224  180868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 20:49:30.376451  180868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 20:49:30.381712  180868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 20:49:30.381822  180868 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 20:49:30.392486  180868 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 20:50:10.389392  180868 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 20:50:10.390281  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:50:10.390544  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:50:15.390236  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:50:15.390461  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:50:25.390205  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:50:25.390456  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:50:45.389885  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:50:45.390119  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:51:25.392273  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:51:25.392835  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:51:25.392857  180868 kubeadm.go:310] 
	I0731 20:51:25.392953  180868 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 20:51:25.393045  180868 kubeadm.go:310] 		timed out waiting for the condition
	I0731 20:51:25.393053  180868 kubeadm.go:310] 
	I0731 20:51:25.393117  180868 kubeadm.go:310] 	This error is likely caused by:
	I0731 20:51:25.393195  180868 kubeadm.go:310] 		- The kubelet is not running
	I0731 20:51:25.393519  180868 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 20:51:25.393554  180868 kubeadm.go:310] 
	I0731 20:51:25.393778  180868 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 20:51:25.393856  180868 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 20:51:25.393929  180868 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 20:51:25.393939  180868 kubeadm.go:310] 
	I0731 20:51:25.394181  180868 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 20:51:25.394367  180868 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 20:51:25.394374  180868 kubeadm.go:310] 
	I0731 20:51:25.394607  180868 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 20:51:25.394795  180868 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 20:51:25.395832  180868 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 20:51:25.396176  180868 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 20:51:25.396211  180868 kubeadm.go:310] 
	I0731 20:51:25.396366  180868 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 20:51:25.396472  180868 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0731 20:51:25.396663  180868 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-239115] and IPs [192.168.61.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-239115] and IPs [192.168.61.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-239115] and IPs [192.168.61.51 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-239115] and IPs [192.168.61.51 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 20:51:25.396718  180868 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 20:51:25.396670  180868 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 20:51:26.515397  180868 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.118652718s)
	I0731 20:51:26.515470  180868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:51:26.530145  180868 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:51:26.539834  180868 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:51:26.539860  180868 kubeadm.go:157] found existing configuration files:
	
	I0731 20:51:26.539903  180868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:51:26.549303  180868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:51:26.549377  180868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:51:26.558372  180868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:51:26.567508  180868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:51:26.567568  180868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:51:26.576363  180868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:51:26.585043  180868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:51:26.585096  180868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:51:26.594455  180868 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:51:26.603558  180868 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:51:26.603611  180868 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:51:26.612533  180868 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 20:51:26.687344  180868 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 20:51:26.687438  180868 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 20:51:26.833243  180868 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 20:51:26.833441  180868 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 20:51:26.833557  180868 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 20:51:27.027290  180868 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 20:51:27.029260  180868 out.go:204]   - Generating certificates and keys ...
	I0731 20:51:27.029400  180868 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 20:51:27.029485  180868 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 20:51:27.029589  180868 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 20:51:27.029696  180868 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 20:51:27.029809  180868 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 20:51:27.029886  180868 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 20:51:27.029973  180868 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 20:51:27.030625  180868 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 20:51:27.031592  180868 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 20:51:27.032608  180868 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 20:51:27.032951  180868 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 20:51:27.033041  180868 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 20:51:27.385389  180868 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 20:51:27.734973  180868 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 20:51:27.872101  180868 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 20:51:27.932272  180868 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 20:51:27.949087  180868 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 20:51:27.951595  180868 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 20:51:27.951650  180868 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 20:51:28.098197  180868 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 20:51:28.100353  180868 out.go:204]   - Booting up control plane ...
	I0731 20:51:28.100539  180868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 20:51:28.113571  180868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 20:51:28.115075  180868 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 20:51:28.117209  180868 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 20:51:28.120336  180868 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 20:52:08.123316  180868 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 20:52:08.123567  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:52:08.123838  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:52:13.124560  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:52:13.124789  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:52:23.124668  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:52:23.124866  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:52:43.123825  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:52:43.124041  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:53:23.123561  180868 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 20:53:23.123782  180868 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 20:53:23.123813  180868 kubeadm.go:310] 
	I0731 20:53:23.123915  180868 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 20:53:23.123977  180868 kubeadm.go:310] 		timed out waiting for the condition
	I0731 20:53:23.123989  180868 kubeadm.go:310] 
	I0731 20:53:23.124032  180868 kubeadm.go:310] 	This error is likely caused by:
	I0731 20:53:23.124083  180868 kubeadm.go:310] 		- The kubelet is not running
	I0731 20:53:23.124214  180868 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 20:53:23.124224  180868 kubeadm.go:310] 
	I0731 20:53:23.124347  180868 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 20:53:23.124396  180868 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 20:53:23.124434  180868 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 20:53:23.124448  180868 kubeadm.go:310] 
	I0731 20:53:23.124591  180868 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 20:53:23.124712  180868 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 20:53:23.124725  180868 kubeadm.go:310] 
	I0731 20:53:23.124878  180868 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 20:53:23.124989  180868 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 20:53:23.125099  180868 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 20:53:23.125197  180868 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 20:53:23.125210  180868 kubeadm.go:310] 
	I0731 20:53:23.125936  180868 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 20:53:23.126053  180868 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 20:53:23.126156  180868 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 20:53:23.126231  180868 kubeadm.go:394] duration metric: took 3m57.5047371s to StartCluster
	I0731 20:53:23.126276  180868 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 20:53:23.126325  180868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 20:53:23.169161  180868 cri.go:89] found id: ""
	I0731 20:53:23.169194  180868 logs.go:276] 0 containers: []
	W0731 20:53:23.169209  180868 logs.go:278] No container was found matching "kube-apiserver"
	I0731 20:53:23.169218  180868 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 20:53:23.169281  180868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 20:53:23.203823  180868 cri.go:89] found id: ""
	I0731 20:53:23.203853  180868 logs.go:276] 0 containers: []
	W0731 20:53:23.203860  180868 logs.go:278] No container was found matching "etcd"
	I0731 20:53:23.203866  180868 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 20:53:23.203917  180868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 20:53:23.237868  180868 cri.go:89] found id: ""
	I0731 20:53:23.237900  180868 logs.go:276] 0 containers: []
	W0731 20:53:23.237910  180868 logs.go:278] No container was found matching "coredns"
	I0731 20:53:23.237919  180868 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 20:53:23.237987  180868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 20:53:23.271146  180868 cri.go:89] found id: ""
	I0731 20:53:23.271180  180868 logs.go:276] 0 containers: []
	W0731 20:53:23.271192  180868 logs.go:278] No container was found matching "kube-scheduler"
	I0731 20:53:23.271200  180868 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 20:53:23.271266  180868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 20:53:23.308232  180868 cri.go:89] found id: ""
	I0731 20:53:23.308265  180868 logs.go:276] 0 containers: []
	W0731 20:53:23.308277  180868 logs.go:278] No container was found matching "kube-proxy"
	I0731 20:53:23.308285  180868 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 20:53:23.308343  180868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 20:53:23.342021  180868 cri.go:89] found id: ""
	I0731 20:53:23.342055  180868 logs.go:276] 0 containers: []
	W0731 20:53:23.342066  180868 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 20:53:23.342074  180868 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 20:53:23.342140  180868 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 20:53:23.376147  180868 cri.go:89] found id: ""
	I0731 20:53:23.376179  180868 logs.go:276] 0 containers: []
	W0731 20:53:23.376186  180868 logs.go:278] No container was found matching "kindnet"
	I0731 20:53:23.376197  180868 logs.go:123] Gathering logs for kubelet ...
	I0731 20:53:23.376210  180868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 20:53:23.426993  180868 logs.go:123] Gathering logs for dmesg ...
	I0731 20:53:23.427027  180868 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 20:53:23.441193  180868 logs.go:123] Gathering logs for describe nodes ...
	I0731 20:53:23.441223  180868 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 20:53:23.547960  180868 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 20:53:23.547985  180868 logs.go:123] Gathering logs for CRI-O ...
	I0731 20:53:23.548001  180868 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 20:53:23.645069  180868 logs.go:123] Gathering logs for container status ...
	I0731 20:53:23.645107  180868 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 20:53:23.690657  180868 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 20:53:23.690704  180868 out.go:239] * 
	* 
	W0731 20:53:23.690773  180868 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 20:53:23.690793  180868 out.go:239] * 
	* 
	W0731 20:53:23.691555  180868 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 20:53:23.694711  180868 out.go:177] 
	W0731 20:53:23.696265  180868 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 20:53:23.696325  180868 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 20:53:23.696348  180868 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 20:53:23.698023  180868 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-239115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 6 (217.395629ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:53:23.956949  187576 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-239115" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (320.83s)
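
Editor's note: the failure above is kubeadm timing out in the wait-control-plane phase because the kubelet health endpoint (localhost:10248/healthz) never answers. A possible triage sequence, assembled only from the commands the log itself suggests; the profile name old-k8s-version-239115 comes from the test invocation, and using `minikube ssh -p` to reach the guest is an assumption, not something the report shows:

	# On the guest (e.g. via `out/minikube-linux-amd64 ssh -p old-k8s-version-239115`), check the kubelet first:
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List the control-plane containers under cri-o and pull logs from any that crashed:
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# The minikube output above also suggests retrying with the systemd cgroup driver:
	out/minikube-linux-amd64 start -p old-k8s-version-239115 --extra-config=kubelet.cgroup-driver=systemd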

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-831240 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-831240 --alsologtostderr -v=3: exit status 82 (2m0.528002867s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-831240"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:51:10.836670  186786 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:51:10.836805  186786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:51:10.836817  186786 out.go:304] Setting ErrFile to fd 2...
	I0731 20:51:10.836824  186786 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:51:10.837103  186786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:51:10.837457  186786 out.go:298] Setting JSON to false
	I0731 20:51:10.837564  186786 mustload.go:65] Loading cluster: embed-certs-831240
	I0731 20:51:10.838014  186786 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:51:10.838119  186786 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/config.json ...
	I0731 20:51:10.838343  186786 mustload.go:65] Loading cluster: embed-certs-831240
	I0731 20:51:10.838493  186786 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:51:10.838539  186786 stop.go:39] StopHost: embed-certs-831240
	I0731 20:51:10.839111  186786 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:51:10.839176  186786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:51:10.854181  186786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44339
	I0731 20:51:10.854747  186786 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:51:10.855383  186786 main.go:141] libmachine: Using API Version  1
	I0731 20:51:10.855408  186786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:51:10.855993  186786 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:51:10.858198  186786 out.go:177] * Stopping node "embed-certs-831240"  ...
	I0731 20:51:10.859682  186786 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 20:51:10.859732  186786 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:51:10.859989  186786 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 20:51:10.860022  186786 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:51:10.863384  186786 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:51:10.863914  186786 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:50:09 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:51:10.863953  186786 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:51:10.864224  186786 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:51:10.864406  186786 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:51:10.864588  186786 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:51:10.864774  186786 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:51:10.991035  186786 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 20:51:11.049554  186786 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 20:51:11.112867  186786 main.go:141] libmachine: Stopping "embed-certs-831240"...
	I0731 20:51:11.112901  186786 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 20:51:11.114859  186786 main.go:141] libmachine: (embed-certs-831240) Calling .Stop
	I0731 20:51:11.119320  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 0/120
	I0731 20:51:12.120780  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 1/120
	I0731 20:51:13.122193  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 2/120
	I0731 20:51:14.123901  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 3/120
	I0731 20:51:15.125317  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 4/120
	I0731 20:51:16.127025  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 5/120
	I0731 20:51:17.128522  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 6/120
	I0731 20:51:18.130009  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 7/120
	I0731 20:51:19.132124  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 8/120
	I0731 20:51:20.133422  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 9/120
	I0731 20:51:21.135438  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 10/120
	I0731 20:51:22.137099  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 11/120
	I0731 20:51:23.138481  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 12/120
	I0731 20:51:24.139889  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 13/120
	I0731 20:51:25.141420  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 14/120
	I0731 20:51:26.142837  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 15/120
	I0731 20:51:27.144242  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 16/120
	I0731 20:51:28.145660  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 17/120
	I0731 20:51:29.147069  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 18/120
	I0731 20:51:30.148411  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 19/120
	I0731 20:51:31.150260  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 20/120
	I0731 20:51:32.152046  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 21/120
	I0731 20:51:33.153571  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 22/120
	I0731 20:51:34.155815  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 23/120
	I0731 20:51:35.157101  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 24/120
	I0731 20:51:36.158943  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 25/120
	I0731 20:51:37.160070  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 26/120
	I0731 20:51:38.161389  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 27/120
	I0731 20:51:39.162648  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 28/120
	I0731 20:51:40.164352  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 29/120
	I0731 20:51:41.166616  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 30/120
	I0731 20:51:42.167960  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 31/120
	I0731 20:51:43.169550  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 32/120
	I0731 20:51:44.171958  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 33/120
	I0731 20:51:45.173534  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 34/120
	I0731 20:51:46.175488  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 35/120
	I0731 20:51:47.177328  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 36/120
	I0731 20:51:48.178667  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 37/120
	I0731 20:51:49.180451  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 38/120
	I0731 20:51:50.181986  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 39/120
	I0731 20:51:51.184444  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 40/120
	I0731 20:51:52.185964  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 41/120
	I0731 20:51:53.187750  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 42/120
	I0731 20:51:54.189408  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 43/120
	I0731 20:51:55.190798  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 44/120
	I0731 20:51:56.192719  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 45/120
	I0731 20:51:57.194151  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 46/120
	I0731 20:51:58.195306  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 47/120
	I0731 20:51:59.197166  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 48/120
	I0731 20:52:00.198484  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 49/120
	I0731 20:52:01.200519  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 50/120
	I0731 20:52:02.202053  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 51/120
	I0731 20:52:03.203420  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 52/120
	I0731 20:52:04.204818  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 53/120
	I0731 20:52:05.206188  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 54/120
	I0731 20:52:06.208081  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 55/120
	I0731 20:52:07.209453  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 56/120
	I0731 20:52:08.210969  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 57/120
	I0731 20:52:09.212431  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 58/120
	I0731 20:52:10.214106  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 59/120
	I0731 20:52:11.216439  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 60/120
	I0731 20:52:12.217863  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 61/120
	I0731 20:52:13.219788  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 62/120
	I0731 20:52:14.221086  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 63/120
	I0731 20:52:15.222493  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 64/120
	I0731 20:52:16.224657  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 65/120
	I0731 20:52:17.226157  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 66/120
	I0731 20:52:18.227509  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 67/120
	I0731 20:52:19.228948  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 68/120
	I0731 20:52:20.230275  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 69/120
	I0731 20:52:21.231857  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 70/120
	I0731 20:52:22.233121  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 71/120
	I0731 20:52:23.234570  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 72/120
	I0731 20:52:24.236043  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 73/120
	I0731 20:52:25.237332  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 74/120
	I0731 20:52:26.239550  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 75/120
	I0731 20:52:27.240972  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 76/120
	I0731 20:52:28.242412  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 77/120
	I0731 20:52:29.243776  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 78/120
	I0731 20:52:30.245269  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 79/120
	I0731 20:52:31.247542  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 80/120
	I0731 20:52:32.249295  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 81/120
	I0731 20:52:33.250778  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 82/120
	I0731 20:52:34.252296  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 83/120
	I0731 20:52:35.253726  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 84/120
	I0731 20:52:36.255729  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 85/120
	I0731 20:52:37.257169  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 86/120
	I0731 20:52:38.258646  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 87/120
	I0731 20:52:39.260501  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 88/120
	I0731 20:52:40.261961  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 89/120
	I0731 20:52:41.263976  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 90/120
	I0731 20:52:42.265235  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 91/120
	I0731 20:52:43.266556  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 92/120
	I0731 20:52:44.267873  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 93/120
	I0731 20:52:45.269260  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 94/120
	I0731 20:52:46.271335  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 95/120
	I0731 20:52:47.272786  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 96/120
	I0731 20:52:48.274316  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 97/120
	I0731 20:52:49.275665  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 98/120
	I0731 20:52:50.277031  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 99/120
	I0731 20:52:51.279100  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 100/120
	I0731 20:52:52.280642  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 101/120
	I0731 20:52:53.282034  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 102/120
	I0731 20:52:54.283488  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 103/120
	I0731 20:52:55.284806  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 104/120
	I0731 20:52:56.286721  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 105/120
	I0731 20:52:57.288193  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 106/120
	I0731 20:52:58.289538  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 107/120
	I0731 20:52:59.291024  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 108/120
	I0731 20:53:00.292478  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 109/120
	I0731 20:53:01.295006  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 110/120
	I0731 20:53:02.296321  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 111/120
	I0731 20:53:03.297650  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 112/120
	I0731 20:53:04.299277  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 113/120
	I0731 20:53:05.300541  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 114/120
	I0731 20:53:06.302428  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 115/120
	I0731 20:53:07.303899  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 116/120
	I0731 20:53:08.305035  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 117/120
	I0731 20:53:09.306626  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 118/120
	I0731 20:53:10.307964  186786 main.go:141] libmachine: (embed-certs-831240) Waiting for machine to stop 119/120
	I0731 20:53:11.309091  186786 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 20:53:11.309187  186786 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 20:53:11.311147  186786 out.go:177] 
	W0731 20:53:11.312449  186786 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 20:53:11.312464  186786 out.go:239] * 
	* 
	W0731 20:53:11.315377  186786 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 20:53:11.316821  186786 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-831240 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240
E0731 20:53:11.908542  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:53:13.416516  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:18.537712  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240: exit status 3 (18.515235715s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:53:29.833641  187493 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host
	E0731 20:53:29.833669  187493 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-831240" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.04s)
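
Editor's note: the stop above exhausts all 120 wait iterations and exits with GUEST_STOP_TIMEOUT while libvirt still reports the VM as "Running". A minimal follow-up sketch: the `logs` invocation is the one the failure box asks for, while the virsh step is an assumption (it presumes the kvm2 driver names the libvirt domain after the profile, which the report does not state):

	# Collect the log bundle the error message requests:
	out/minikube-linux-amd64 logs -p embed-certs-831240 --file=logs.txt
	# Assumed extra step for the kvm2 driver: inspect and hard power-off the stuck domain via libvirt.
	sudo virsh list --all
	sudo virsh destroy embed-certs-831240    # forceful power-off of the VM that refused to stop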

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-916885 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-916885 --alsologtostderr -v=3: exit status 82 (2m0.487083363s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-916885"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:51:32.972656  187030 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:51:32.972906  187030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:51:32.972916  187030 out.go:304] Setting ErrFile to fd 2...
	I0731 20:51:32.972921  187030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:51:32.973090  187030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:51:32.973306  187030 out.go:298] Setting JSON to false
	I0731 20:51:32.973406  187030 mustload.go:65] Loading cluster: no-preload-916885
	I0731 20:51:32.973723  187030 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 20:51:32.973787  187030 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/config.json ...
	I0731 20:51:32.973945  187030 mustload.go:65] Loading cluster: no-preload-916885
	I0731 20:51:32.974035  187030 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 20:51:32.974066  187030 stop.go:39] StopHost: no-preload-916885
	I0731 20:51:32.974420  187030 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:51:32.974471  187030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:51:32.989476  187030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0731 20:51:32.989998  187030 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:51:32.990637  187030 main.go:141] libmachine: Using API Version  1
	I0731 20:51:32.990661  187030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:51:32.991014  187030 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:51:32.993211  187030 out.go:177] * Stopping node "no-preload-916885"  ...
	I0731 20:51:32.994808  187030 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 20:51:32.994845  187030 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:51:32.995077  187030 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 20:51:32.995107  187030 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:51:32.997768  187030 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:51:32.998145  187030 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:49:33 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:51:32.998179  187030 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:51:32.998346  187030 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:51:32.998523  187030 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:51:32.998690  187030 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:51:32.998796  187030 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:51:33.084755  187030 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 20:51:33.147813  187030 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 20:51:33.215866  187030 main.go:141] libmachine: Stopping "no-preload-916885"...
	I0731 20:51:33.215898  187030 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 20:51:33.217528  187030 main.go:141] libmachine: (no-preload-916885) Calling .Stop
	I0731 20:51:33.221110  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 0/120
	I0731 20:51:34.223033  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 1/120
	I0731 20:51:35.224529  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 2/120
	I0731 20:51:36.225865  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 3/120
	I0731 20:51:37.227794  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 4/120
	I0731 20:51:38.229766  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 5/120
	I0731 20:51:39.231801  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 6/120
	I0731 20:51:40.233195  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 7/120
	I0731 20:51:41.234622  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 8/120
	I0731 20:51:42.236012  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 9/120
	I0731 20:51:43.238337  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 10/120
	I0731 20:51:44.239794  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 11/120
	I0731 20:51:45.241201  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 12/120
	I0731 20:51:46.242786  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 13/120
	I0731 20:51:47.244257  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 14/120
	I0731 20:51:48.246446  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 15/120
	I0731 20:51:49.247666  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 16/120
	I0731 20:51:50.249231  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 17/120
	I0731 20:51:51.250484  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 18/120
	I0731 20:51:52.251729  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 19/120
	I0731 20:51:53.254121  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 20/120
	I0731 20:51:54.255589  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 21/120
	I0731 20:51:55.256858  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 22/120
	I0731 20:51:56.258337  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 23/120
	I0731 20:51:57.259587  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 24/120
	I0731 20:51:58.261392  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 25/120
	I0731 20:51:59.262747  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 26/120
	I0731 20:52:00.263965  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 27/120
	I0731 20:52:01.265425  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 28/120
	I0731 20:52:02.266771  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 29/120
	I0731 20:52:03.269048  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 30/120
	I0731 20:52:04.270399  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 31/120
	I0731 20:52:05.271774  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 32/120
	I0731 20:52:06.273032  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 33/120
	I0731 20:52:07.274517  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 34/120
	I0731 20:52:08.276506  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 35/120
	I0731 20:52:09.277966  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 36/120
	I0731 20:52:10.279933  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 37/120
	I0731 20:52:11.281192  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 38/120
	I0731 20:52:12.282349  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 39/120
	I0731 20:52:13.284669  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 40/120
	I0731 20:52:14.286123  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 41/120
	I0731 20:52:15.287569  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 42/120
	I0731 20:52:16.288888  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 43/120
	I0731 20:52:17.290370  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 44/120
	I0731 20:52:18.292466  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 45/120
	I0731 20:52:19.293896  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 46/120
	I0731 20:52:20.295170  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 47/120
	I0731 20:52:21.296466  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 48/120
	I0731 20:52:22.298094  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 49/120
	I0731 20:52:23.300207  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 50/120
	I0731 20:52:24.301613  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 51/120
	I0731 20:52:25.303175  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 52/120
	I0731 20:52:26.304468  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 53/120
	I0731 20:52:27.305847  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 54/120
	I0731 20:52:28.307906  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 55/120
	I0731 20:52:29.309325  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 56/120
	I0731 20:52:30.310793  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 57/120
	I0731 20:52:31.312177  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 58/120
	I0731 20:52:32.313585  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 59/120
	I0731 20:52:33.315234  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 60/120
	I0731 20:52:34.316636  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 61/120
	I0731 20:52:35.318038  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 62/120
	I0731 20:52:36.319480  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 63/120
	I0731 20:52:37.320748  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 64/120
	I0731 20:52:38.322714  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 65/120
	I0731 20:52:39.324897  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 66/120
	I0731 20:52:40.326362  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 67/120
	I0731 20:52:41.327792  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 68/120
	I0731 20:52:42.329130  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 69/120
	I0731 20:52:43.331293  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 70/120
	I0731 20:52:44.332645  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 71/120
	I0731 20:52:45.334042  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 72/120
	I0731 20:52:46.335306  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 73/120
	I0731 20:52:47.336723  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 74/120
	I0731 20:52:48.338712  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 75/120
	I0731 20:52:49.340120  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 76/120
	I0731 20:52:50.341499  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 77/120
	I0731 20:52:51.342900  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 78/120
	I0731 20:52:52.344374  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 79/120
	I0731 20:52:53.346740  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 80/120
	I0731 20:52:54.348140  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 81/120
	I0731 20:52:55.349577  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 82/120
	I0731 20:52:56.350925  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 83/120
	I0731 20:52:57.352324  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 84/120
	I0731 20:52:58.354129  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 85/120
	I0731 20:52:59.355546  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 86/120
	I0731 20:53:00.356968  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 87/120
	I0731 20:53:01.358464  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 88/120
	I0731 20:53:02.359841  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 89/120
	I0731 20:53:03.361996  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 90/120
	I0731 20:53:04.363308  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 91/120
	I0731 20:53:05.364658  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 92/120
	I0731 20:53:06.366699  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 93/120
	I0731 20:53:07.368057  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 94/120
	I0731 20:53:08.370170  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 95/120
	I0731 20:53:09.371632  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 96/120
	I0731 20:53:10.372925  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 97/120
	I0731 20:53:11.374486  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 98/120
	I0731 20:53:12.375947  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 99/120
	I0731 20:53:13.378164  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 100/120
	I0731 20:53:14.379594  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 101/120
	I0731 20:53:15.381028  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 102/120
	I0731 20:53:16.382335  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 103/120
	I0731 20:53:17.383737  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 104/120
	I0731 20:53:18.386059  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 105/120
	I0731 20:53:19.387425  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 106/120
	I0731 20:53:20.388856  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 107/120
	I0731 20:53:21.390252  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 108/120
	I0731 20:53:22.391789  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 109/120
	I0731 20:53:23.394250  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 110/120
	I0731 20:53:24.395449  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 111/120
	I0731 20:53:25.396729  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 112/120
	I0731 20:53:26.397849  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 113/120
	I0731 20:53:27.399941  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 114/120
	I0731 20:53:28.402308  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 115/120
	I0731 20:53:29.404194  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 116/120
	I0731 20:53:30.405519  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 117/120
	I0731 20:53:31.407765  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 118/120
	I0731 20:53:32.409160  187030 main.go:141] libmachine: (no-preload-916885) Waiting for machine to stop 119/120
	I0731 20:53:33.410640  187030 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 20:53:33.410719  187030 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 20:53:33.412739  187030 out.go:177] 
	W0731 20:53:33.414056  187030 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 20:53:33.414070  187030 out.go:239] * 
	* 
	W0731 20:53:33.416743  187030 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 20:53:33.418024  187030 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-916885 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885
E0731 20:53:33.860682  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885: exit status 3 (18.426440338s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:53:51.845728  187786 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.239:22: connect: no route to host
	E0731 20:53:51.845753  187786 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.239:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-916885" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.91s)
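
Editor's note: the same stop timeout occurs for no-preload-916885, and the post-mortem status call then fails with "no route to host" on 192.168.72.239:22, i.e. the guest still exists but is unreachable over SSH. A quick reachability check, using only the IP address, user, and key path already shown in the log above (whether the host network or the guest is at fault is not something the report establishes):

	# Can the guest be reached at all?
	ping -c 3 192.168.72.239
	# Try the same SSH path the tests use (user and key taken from the log above):
	ssh -o ConnectTimeout=5 \
	  -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa \
	  docker@192.168.72.239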

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-125614 --alsologtostderr -v=3
E0731 20:52:10.331291  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:52:11.937947  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:11.943264  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:11.953541  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:11.973815  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:12.014102  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:12.094421  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:12.255153  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:12.575913  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:13.216046  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:14.496777  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:17.057504  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:22.178559  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:30.946963  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:30.952244  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:30.962533  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:30.982850  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:31.023214  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:31.103711  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:31.263803  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:31.584728  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:32.224929  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:32.419013  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:52:33.505989  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:34.578283  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 20:52:36.066406  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:41.187210  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:51.428272  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:52:52.899764  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:53:04.911792  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:53:08.295281  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:08.300574  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:08.311256  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:08.331499  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:08.372465  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:08.452932  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:08.613445  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:08.934503  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:09.575158  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:53:10.855852  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-125614 --alsologtostderr -v=3: exit status 82 (2m0.463021501s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-125614"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:51:44.426829  187192 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:51:44.426942  187192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:51:44.426951  187192 out.go:304] Setting ErrFile to fd 2...
	I0731 20:51:44.426955  187192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:51:44.427136  187192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:51:44.427384  187192 out.go:298] Setting JSON to false
	I0731 20:51:44.427481  187192 mustload.go:65] Loading cluster: default-k8s-diff-port-125614
	I0731 20:51:44.427825  187192 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:51:44.427902  187192 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/config.json ...
	I0731 20:51:44.428088  187192 mustload.go:65] Loading cluster: default-k8s-diff-port-125614
	I0731 20:51:44.428225  187192 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:51:44.428283  187192 stop.go:39] StopHost: default-k8s-diff-port-125614
	I0731 20:51:44.428703  187192 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:51:44.428756  187192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:51:44.443418  187192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0731 20:51:44.443893  187192 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:51:44.444470  187192 main.go:141] libmachine: Using API Version  1
	I0731 20:51:44.444496  187192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:51:44.444841  187192 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:51:44.447130  187192 out.go:177] * Stopping node "default-k8s-diff-port-125614"  ...
	I0731 20:51:44.448370  187192 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 20:51:44.448395  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:51:44.448639  187192 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 20:51:44.448673  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:51:44.451591  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:51:44.451996  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:51:44.452032  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:51:44.452208  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:51:44.452423  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:51:44.452623  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:51:44.452790  187192 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:51:44.543835  187192 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 20:51:44.590228  187192 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 20:51:44.650457  187192 main.go:141] libmachine: Stopping "default-k8s-diff-port-125614"...
	I0731 20:51:44.650500  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:51:44.652181  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Stop
	I0731 20:51:44.655693  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 0/120
	I0731 20:51:45.657136  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 1/120
	I0731 20:51:46.658454  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 2/120
	I0731 20:51:47.659721  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 3/120
	I0731 20:51:48.660924  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 4/120
	I0731 20:51:49.662880  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 5/120
	I0731 20:51:50.664220  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 6/120
	I0731 20:51:51.665640  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 7/120
	I0731 20:51:52.667810  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 8/120
	I0731 20:51:53.669048  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 9/120
	I0731 20:51:54.670768  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 10/120
	I0731 20:51:55.671935  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 11/120
	I0731 20:51:56.673591  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 12/120
	I0731 20:51:57.674757  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 13/120
	I0731 20:51:58.676127  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 14/120
	I0731 20:51:59.678339  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 15/120
	I0731 20:52:00.679739  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 16/120
	I0731 20:52:01.680963  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 17/120
	I0731 20:52:02.682200  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 18/120
	I0731 20:52:03.683792  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 19/120
	I0731 20:52:04.686055  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 20/120
	I0731 20:52:05.687343  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 21/120
	I0731 20:52:06.688873  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 22/120
	I0731 20:52:07.690125  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 23/120
	I0731 20:52:08.691497  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 24/120
	I0731 20:52:09.693598  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 25/120
	I0731 20:52:10.694837  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 26/120
	I0731 20:52:11.696211  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 27/120
	I0731 20:52:12.697505  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 28/120
	I0731 20:52:13.699091  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 29/120
	I0731 20:52:14.701384  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 30/120
	I0731 20:52:15.702694  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 31/120
	I0731 20:52:16.703888  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 32/120
	I0731 20:52:17.705014  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 33/120
	I0731 20:52:18.706463  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 34/120
	I0731 20:52:19.708510  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 35/120
	I0731 20:52:20.709822  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 36/120
	I0731 20:52:21.711355  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 37/120
	I0731 20:52:22.712599  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 38/120
	I0731 20:52:23.714101  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 39/120
	I0731 20:52:24.716366  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 40/120
	I0731 20:52:25.717785  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 41/120
	I0731 20:52:26.719226  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 42/120
	I0731 20:52:27.720525  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 43/120
	I0731 20:52:28.721993  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 44/120
	I0731 20:52:29.723958  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 45/120
	I0731 20:52:30.725431  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 46/120
	I0731 20:52:31.726717  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 47/120
	I0731 20:52:32.727955  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 48/120
	I0731 20:52:33.729458  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 49/120
	I0731 20:52:34.731571  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 50/120
	I0731 20:52:35.732844  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 51/120
	I0731 20:52:36.734117  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 52/120
	I0731 20:52:37.735417  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 53/120
	I0731 20:52:38.736727  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 54/120
	I0731 20:52:39.738808  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 55/120
	I0731 20:52:40.740063  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 56/120
	I0731 20:52:41.741605  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 57/120
	I0731 20:52:42.742900  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 58/120
	I0731 20:52:43.744301  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 59/120
	I0731 20:52:44.746619  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 60/120
	I0731 20:52:45.747928  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 61/120
	I0731 20:52:46.749293  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 62/120
	I0731 20:52:47.750720  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 63/120
	I0731 20:52:48.752004  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 64/120
	I0731 20:52:49.754072  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 65/120
	I0731 20:52:50.755452  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 66/120
	I0731 20:52:51.756897  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 67/120
	I0731 20:52:52.758280  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 68/120
	I0731 20:52:53.759545  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 69/120
	I0731 20:52:54.761865  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 70/120
	I0731 20:52:55.763144  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 71/120
	I0731 20:52:56.764550  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 72/120
	I0731 20:52:57.765804  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 73/120
	I0731 20:52:58.767186  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 74/120
	I0731 20:52:59.769251  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 75/120
	I0731 20:53:00.770662  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 76/120
	I0731 20:53:01.772068  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 77/120
	I0731 20:53:02.773434  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 78/120
	I0731 20:53:03.775143  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 79/120
	I0731 20:53:04.777439  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 80/120
	I0731 20:53:05.778710  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 81/120
	I0731 20:53:06.780200  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 82/120
	I0731 20:53:07.781503  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 83/120
	I0731 20:53:08.783076  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 84/120
	I0731 20:53:09.785180  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 85/120
	I0731 20:53:10.786507  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 86/120
	I0731 20:53:11.787886  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 87/120
	I0731 20:53:12.789583  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 88/120
	I0731 20:53:13.791057  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 89/120
	I0731 20:53:14.793619  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 90/120
	I0731 20:53:15.795005  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 91/120
	I0731 20:53:16.796423  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 92/120
	I0731 20:53:17.797876  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 93/120
	I0731 20:53:18.799234  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 94/120
	I0731 20:53:19.801301  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 95/120
	I0731 20:53:20.802824  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 96/120
	I0731 20:53:21.804218  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 97/120
	I0731 20:53:22.805787  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 98/120
	I0731 20:53:23.807700  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 99/120
	I0731 20:53:24.809717  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 100/120
	I0731 20:53:25.811031  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 101/120
	I0731 20:53:26.812302  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 102/120
	I0731 20:53:27.814076  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 103/120
	I0731 20:53:28.815583  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 104/120
	I0731 20:53:29.817063  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 105/120
	I0731 20:53:30.818491  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 106/120
	I0731 20:53:31.819762  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 107/120
	I0731 20:53:32.821123  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 108/120
	I0731 20:53:33.822373  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 109/120
	I0731 20:53:34.824612  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 110/120
	I0731 20:53:35.825933  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 111/120
	I0731 20:53:36.827253  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 112/120
	I0731 20:53:37.828567  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 113/120
	I0731 20:53:38.830055  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 114/120
	I0731 20:53:39.832130  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 115/120
	I0731 20:53:40.834019  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 116/120
	I0731 20:53:41.835260  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 117/120
	I0731 20:53:42.836600  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 118/120
	I0731 20:53:43.837988  187192 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for machine to stop 119/120
	I0731 20:53:44.838523  187192 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 20:53:44.838600  187192 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 20:53:44.840670  187192 out.go:177] 
	W0731 20:53:44.841924  187192 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 20:53:44.841940  187192 out.go:239] * 
	* 
	W0731 20:53:44.844596  187192 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 20:53:44.845980  187192 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-125614 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
E0731 20:53:49.259061  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614: exit status 3 (18.518501906s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:54:03.365755  187927 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E0731 20:54:03.365778  187927 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125614" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.98s)
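
The stop failure above comes from the bounded wait visible in the stderr: libmachine polls the VM state roughly once per second for 120 attempts ("Waiting for machine to stop N/120") and then gives up with GUEST_STOP_TIMEOUT because the domain is still "Running". A minimal sketch of that retry pattern, not minikube's actual implementation (isStopped is a placeholder for a real hypervisor query):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls an "is the VM stopped?" check once per second, up to
	// maxAttempts times, mirroring the "Waiting for machine to stop N/120"
	// lines above. isStopped is a placeholder for a real driver call.
	func waitForStop(isStopped func() bool, maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			if isStopped() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// The log shows 120 attempts (about two minutes); 3 keeps the demo short.
		err := waitForStop(func() bool { return false }, 3)
		fmt.Println("stop err:", err)
	}
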

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-239115 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-239115 create -f testdata/busybox.yaml: exit status 1 (42.756288ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-239115" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-239115 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 6 (214.292794ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:53:24.214498  187616 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-239115" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 6 (213.611534ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:53:24.428462  187646 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-239115" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
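
The status output above warns that kubectl is pointing at a stale minikube VM and suggests running `minikube update-context`. A sketch of that suggested recovery followed by a retry of the failed deploy; it assumes the profile's VM is actually reachable, and the only value taken from the log is the profile name:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "old-k8s-version-239115"
		// Refresh the kubeconfig entry for the profile, then retry the create
		// that failed above. Both binaries are assumed to be on PATH.
		cmds := [][]string{
			{"minikube", "update-context", "-p", profile},
			{"kubectl", "--context", profile, "create", "-f", "testdata/busybox.yaml"},
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("$ %v\n%s", c, out)
			if err != nil {
				fmt.Println("error:", err)
				return
			}
		}
	}
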

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-239115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0731 20:53:28.778116  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-239115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m47.555273476s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-239115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-239115 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-239115 describe deploy/metrics-server -n kube-system: exit status 1 (44.64514ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-239115" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-239115 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 6 (223.178631ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:55:12.249840  188526 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-239115" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (107.82s)
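
The addon enable above ultimately fails because kubectl, run inside the VM, cannot reach the apiserver: "The connection to the server localhost:8443 was refused". A small reachability probe sketch for that endpoint; the address comes from the error message, and this is an illustrative check rather than anything minikube runs itself:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the apiserver port that the addon manifests were applied against.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// On the failing node this reports "connection refused", matching the stderr above.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open; the addon manifests could be applied")
	}
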

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240
E0731 20:53:32.251677  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240: exit status 3 (3.163943079s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:53:32.997682  187722 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host
	E0731 20:53:32.997702  187722 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-831240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-831240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152770641s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-831240 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240: exit status 3 (3.062976475s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:53:42.213776  187832 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host
	E0731 20:53:42.213796  187832 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.92:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-831240" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885
E0731 20:53:52.869201  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885: exit status 3 (3.167857947s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:53:55.013693  187973 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.239:22: connect: no route to host
	E0731 20:53:55.013714  187973 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.239:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-916885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-916885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152388633s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.239:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-916885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885: exit status 3 (3.06316787s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:54:04.229820  188055 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.239:22: connect: no route to host
	E0731 20:54:04.229843  188055 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.239:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-916885" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614: exit status 3 (3.167911055s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:54:06.533739  188102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E0731 20:54:06.533759  188102 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-125614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-125614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.157374492s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-125614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
E0731 20:54:13.435989  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:13.441223  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:13.451463  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:13.471709  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:13.511978  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:13.592335  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:13.752775  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:14.073410  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:14.714527  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614: exit status 3 (3.058489978s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 20:54:15.749702  188214 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E0731 20:54:15.749729  188214 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-125614" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (740.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-239115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0731 20:55:14.789613  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:55:21.067563  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:55:35.357787  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:55:48.407391  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:55:48.543616  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:55:48.752011  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:55:52.140780  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:56:16.092481  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:56:32.872264  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 20:56:57.278156  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:57:10.463906  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:57:11.938388  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:57:30.947828  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:57:34.577669  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 20:57:39.621886  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:57:58.630062  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 20:58:08.294869  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:58:35.981748  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:59:13.435265  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:59:26.619563  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:59:41.119082  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:59:54.304962  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 21:00:09.824553  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 21:00:21.067436  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 21:00:48.407202  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 21:02:11.938382  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 21:02:30.947347  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
E0731 21:02:34.578037  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 21:03:08.295734  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-239115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m16.609320891s)

-- stdout --
	* [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-239115" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0731 20:55:13.835355  188656 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:55:13.835514  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835525  188656 out.go:304] Setting ErrFile to fd 2...
	I0731 20:55:13.835531  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835717  188656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:55:13.836233  188656 out.go:298] Setting JSON to false
	I0731 20:55:13.837146  188656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9450,"bootTime":1722449864,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:55:13.837206  188656 start.go:139] virtualization: kvm guest
	I0731 20:55:13.839094  188656 out.go:177] * [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:55:13.840630  188656 notify.go:220] Checking for updates...
	I0731 20:55:13.840638  188656 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:55:13.841884  188656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:55:13.843054  188656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:55:13.844295  188656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:55:13.845348  188656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:55:13.846480  188656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:55:13.847974  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:55:13.848349  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.848390  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.863017  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0731 20:55:13.863418  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.863927  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.863980  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.864357  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.864625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.866178  188656 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 20:55:13.867248  188656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:55:13.867523  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.867552  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.881922  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0731 20:55:13.882304  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.882707  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.882729  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.883037  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.883214  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.917067  188656 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:55:13.918247  188656 start.go:297] selected driver: kvm2
	I0731 20:55:13.918260  188656 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.918396  188656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:55:13.919323  188656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.919428  188656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:55:13.934150  188656 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:55:13.934506  188656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:55:13.934569  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:55:13.934583  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:55:13.934630  188656 start.go:340] cluster config:
	{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.934737  188656 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.936401  188656 out.go:177] * Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	I0731 20:55:13.937700  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:55:13.937735  188656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:55:13.937743  188656 cache.go:56] Caching tarball of preloaded images
	I0731 20:55:13.937806  188656 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:55:13.937816  188656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:55:13.937907  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:55:13.938068  188656 start.go:360] acquireMachinesLock for old-k8s-version-239115: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:59:00.418227  188656 start.go:364] duration metric: took 3m46.480116699s to acquireMachinesLock for "old-k8s-version-239115"
	I0731 20:59:00.418294  188656 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:00.418302  188656 fix.go:54] fixHost starting: 
	I0731 20:59:00.418738  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:00.418773  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:00.438533  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0731 20:59:00.438963  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:00.439499  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:59:00.439524  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:00.439930  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:00.441449  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:00.441651  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetState
	I0731 20:59:00.443465  188656 fix.go:112] recreateIfNeeded on old-k8s-version-239115: state=Stopped err=<nil>
	I0731 20:59:00.443505  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	W0731 20:59:00.443679  188656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:00.445840  188656 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239115" ...
	I0731 20:59:00.447208  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .Start
	I0731 20:59:00.447389  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring networks are active...
	I0731 20:59:00.448116  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network default is active
	I0731 20:59:00.448589  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network mk-old-k8s-version-239115 is active
	I0731 20:59:00.448892  188656 main.go:141] libmachine: (old-k8s-version-239115) Getting domain xml...
	I0731 20:59:00.450110  188656 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:59:01.823554  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting to get IP...
	I0731 20:59:01.824648  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:01.825111  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:01.825172  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:01.825080  189574 retry.go:31] will retry after 241.700507ms: waiting for machine to come up
	I0731 20:59:02.068913  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.069608  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.069738  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.069663  189574 retry.go:31] will retry after 258.921821ms: waiting for machine to come up
	I0731 20:59:02.330231  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.330846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.330877  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.330776  189574 retry.go:31] will retry after 460.911793ms: waiting for machine to come up
	I0731 20:59:02.793718  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.794177  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.794206  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.794156  189574 retry.go:31] will retry after 380.241989ms: waiting for machine to come up
	I0731 20:59:03.175918  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.176761  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.176786  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.176670  189574 retry.go:31] will retry after 631.876736ms: waiting for machine to come up
	I0731 20:59:03.810803  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.811478  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.811503  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.811366  189574 retry.go:31] will retry after 583.328017ms: waiting for machine to come up
	I0731 20:59:04.395886  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:04.396400  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:04.396664  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:04.396347  189574 retry.go:31] will retry after 1.154504022s: waiting for machine to come up
	I0731 20:59:05.552240  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:05.552879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:05.552901  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:05.552831  189574 retry.go:31] will retry after 1.037365333s: waiting for machine to come up
	I0731 20:59:06.591875  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:06.592416  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:06.592450  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:06.592329  189574 retry.go:31] will retry after 1.249444079s: waiting for machine to come up
	I0731 20:59:07.843058  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:07.843436  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:07.843463  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:07.843370  189574 retry.go:31] will retry after 1.700521776s: waiting for machine to come up
	I0731 20:59:09.545937  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:09.546581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:09.546605  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:09.546529  189574 retry.go:31] will retry after 1.934269586s: waiting for machine to come up
	I0731 20:59:11.482402  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:11.482794  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:11.482823  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:11.482744  189574 retry.go:31] will retry after 2.575131422s: waiting for machine to come up
	I0731 20:59:14.059385  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:14.059857  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:14.059879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:14.059819  189574 retry.go:31] will retry after 3.127857327s: waiting for machine to come up
	I0731 20:59:17.189405  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:17.189871  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:17.189902  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:17.189821  189574 retry.go:31] will retry after 4.516767425s: waiting for machine to come up
	I0731 20:59:21.708296  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708811  188656 main.go:141] libmachine: (old-k8s-version-239115) Found IP for machine: 192.168.61.51
	I0731 20:59:21.708846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has current primary IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708860  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserving static IP address...
	I0731 20:59:21.709432  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.709663  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserved static IP address: 192.168.61.51
	I0731 20:59:21.709695  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | skip adding static IP to network mk-old-k8s-version-239115 - found existing host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"}
	I0731 20:59:21.709711  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting for SSH to be available...
	I0731 20:59:21.709723  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:59:21.711911  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712310  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.712345  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712517  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:59:21.712540  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:59:21.712581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:21.712598  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:59:21.712625  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:59:21.838026  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:21.838370  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:59:21.839169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:21.842168  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842588  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.842623  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842866  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:59:21.843126  188656 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:21.843150  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:21.843388  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.846148  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846657  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.846686  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.847165  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847360  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847530  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.847707  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.847938  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.847951  188656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:21.955109  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:21.955143  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955460  188656 buildroot.go:166] provisioning hostname "old-k8s-version-239115"
	I0731 20:59:21.955492  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955728  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.958752  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959146  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.959176  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959395  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.959620  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959781  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959918  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.960078  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.960358  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.960378  188656 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239115 && echo "old-k8s-version-239115" | sudo tee /etc/hostname
	I0731 20:59:22.090625  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239115
	
	I0731 20:59:22.090665  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.093927  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094356  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.094387  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094729  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.094942  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095153  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095364  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.095583  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.095819  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.095845  188656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:22.217153  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:22.217189  188656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:22.217215  188656 buildroot.go:174] setting up certificates
	I0731 20:59:22.217229  188656 provision.go:84] configureAuth start
	I0731 20:59:22.217242  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:22.217613  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:22.220640  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221082  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.221125  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221237  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.223811  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224152  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.224180  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224337  188656 provision.go:143] copyHostCerts
	I0731 20:59:22.224405  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:22.224418  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:22.224485  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:22.224604  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:22.224616  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:22.224654  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:22.224729  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:22.224740  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:22.224766  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:22.224833  188656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239115 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-239115]
	I0731 20:59:22.407532  188656 provision.go:177] copyRemoteCerts
	I0731 20:59:22.407599  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:22.407625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.410594  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411007  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.411033  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411338  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.411582  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.411811  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.412007  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.492781  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:22.518278  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 20:59:22.543018  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:22.568888  188656 provision.go:87] duration metric: took 351.643ms to configureAuth
	I0731 20:59:22.568920  188656 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:22.569099  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:59:22.569169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.572154  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572471  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.572500  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572669  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.572872  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.572993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.573112  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.573249  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.573481  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.573512  188656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:22.847156  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:22.847193  188656 machine.go:97] duration metric: took 1.004049055s to provisionDockerMachine
	I0731 20:59:22.847211  188656 start.go:293] postStartSetup for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:59:22.847229  188656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:22.847284  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:22.847710  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:22.847741  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.850515  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.850935  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.850962  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.851088  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.851288  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.851524  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.851674  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.932316  188656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:22.936672  188656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:22.936707  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:22.936792  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:22.936894  188656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:22.937011  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:22.946454  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:22.972952  188656 start.go:296] duration metric: took 125.72216ms for postStartSetup
	I0731 20:59:22.972996  188656 fix.go:56] duration metric: took 22.554695114s for fixHost
	I0731 20:59:22.973026  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.975758  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976166  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.976198  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976320  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.976585  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976782  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976966  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.977115  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.977275  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.977284  188656 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:59:23.082657  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459563.026856067
	
	I0731 20:59:23.082683  188656 fix.go:216] guest clock: 1722459563.026856067
	I0731 20:59:23.082694  188656 fix.go:229] Guest: 2024-07-31 20:59:23.026856067 +0000 UTC Remote: 2024-07-31 20:59:22.973000729 +0000 UTC m=+249.171273714 (delta=53.855338ms)
	I0731 20:59:23.082721  188656 fix.go:200] guest clock delta is within tolerance: 53.855338ms
	I0731 20:59:23.082727  188656 start.go:83] releasing machines lock for "old-k8s-version-239115", held for 22.664459101s
	I0731 20:59:23.082752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.083052  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:23.086626  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087093  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.087135  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087366  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.087954  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088159  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088251  188656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:23.088303  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.088370  188656 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:23.088392  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.091710  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.091989  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092073  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092101  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092227  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092429  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.092472  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092520  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092618  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.092752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092803  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.092931  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.093100  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.093255  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.175012  188656 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:23.200192  188656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:23.348227  188656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:23.355109  188656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:23.355195  188656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:23.371683  188656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
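The disable step above renames any bridge/podman CNI configs with an .mk_disabled suffix rather than deleting them. A minimal check on the node (the conflist name is the one reported in the log line above):

	ls /etc/cni/net.d/
	# expected to list: 87-podman-bridge.conflist.mk_disabled
	# re-enabling is just the reverse rename:
	# sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist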
	I0731 20:59:23.371707  188656 start.go:495] detecting cgroup driver to use...
	I0731 20:59:23.371786  188656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:23.388727  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:23.408830  188656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:23.408907  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:23.423594  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:23.437876  188656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:23.559105  188656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:23.743186  188656 docker.go:233] disabling docker service ...
	I0731 20:59:23.743253  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:23.758053  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:23.779951  188656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:23.919494  188656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:24.057230  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
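With cri-docker and docker stopped and masked above, the switch to CRI-O can be double-checked with a few systemctl queries (a sketch; unit names as used in the log):

	sudo systemctl is-active docker                 # expected: inactive (non-zero exit)
	sudo systemctl is-enabled docker.service        # expected: masked
	sudo systemctl is-enabled cri-docker.socket || true   # masked, disabled, or not installed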
	I0731 20:59:24.072687  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:24.094528  188656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:59:24.094600  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.106579  188656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:24.106634  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.120079  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.130759  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
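Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys; this is a sketch of the relevant entries only, assuming the stock drop-in layout shipped in the minikube ISO:

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"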
	I0731 20:59:24.142925  188656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:24.154760  188656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:24.165059  188656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:24.165113  188656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:24.179567  188656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
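The sysctl failure above is expected while br_netfilter is not yet loaded; after the modprobe and the echo into ip_forward, the prerequisites can be re-checked by hand:

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables       # key exists once br_netfilter is loaded
	cat /proc/sys/net/ipv4/ip_forward               # 1 after the echo above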
	I0731 20:59:24.191838  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:24.339078  188656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:24.515723  188656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:24.515810  188656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:24.521882  188656 start.go:563] Will wait 60s for crictl version
	I0731 20:59:24.521966  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:24.527655  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:24.581055  188656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:24.581151  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.623207  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.662956  188656 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:59:24.664851  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:24.668464  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.668842  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:24.668869  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.669103  188656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:24.674448  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
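The one-liner above rewrites /etc/hosts through a temp file: it strips any existing host.minikube.internal entry and appends the gateway address, so afterwards the file contains a line like:

	192.168.61.1	host.minikube.internal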
	I0731 20:59:24.690857  188656 kubeadm.go:883] updating cluster {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:24.691011  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:59:24.691056  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:24.744259  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:24.744348  188656 ssh_runner.go:195] Run: which lz4
	I0731 20:59:24.749358  188656 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:59:24.754299  188656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:24.754341  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:59:26.551495  188656 crio.go:462] duration metric: took 1.802206904s to copy over tarball
	I0731 20:59:26.551571  188656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:29.653941  188656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102337952s)
	I0731 20:59:29.653974  188656 crio.go:469] duration metric: took 3.102444338s to extract the tarball
	I0731 20:59:29.653982  188656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:29.704065  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:29.745966  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:29.746010  188656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:59:29.746076  188656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.746107  188656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.746129  188656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.746149  188656 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.746170  188656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:59:29.746410  188656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.746423  188656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.746735  188656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.747998  188656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.748005  188656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.748021  188656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.748091  188656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.915865  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.918049  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.950840  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.952762  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.956317  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.959905  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:59:30.000707  188656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:59:30.000768  188656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.000821  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.007207  188656 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:59:30.007251  188656 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.007294  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.016613  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.082306  188656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:59:30.082358  188656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.082364  188656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:59:30.082414  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.082418  188656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.082557  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.089299  188656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:59:30.089382  188656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.089427  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.105150  188656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:59:30.105217  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.105246  188656 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:59:30.105264  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.105282  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.129702  188656 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:59:30.129748  188656 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.129779  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.129826  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.129853  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.129800  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.188192  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:59:30.188243  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:59:30.188342  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:59:30.188365  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.268231  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:59:30.268296  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:59:30.268337  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:59:30.287822  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:59:30.287929  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:59:30.635440  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:30.776879  188656 cache_images.go:92] duration metric: took 1.030849977s to LoadCachedImages
	W0731 20:59:30.777006  188656 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
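The warning means the per-image cache files were never created on this host, so minikube proceeds without them and the control-plane images have to be fetched by the runtime instead. A quick way to see what the host-side cache actually contains (path taken from the error message):

	ls /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/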
	I0731 20:59:30.777028  188656 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0731 20:59:30.777175  188656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:30.777284  188656 ssh_runner.go:195] Run: crio config
	I0731 20:59:30.832542  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:59:30.832570  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:30.832586  188656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:30.832618  188656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239115 NodeName:old-k8s-version-239115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:59:30.832798  188656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239115"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
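The generated config is staged as /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below) and later compared against any existing kubeadm.yaml; that comparison can be repeated by hand on the node:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new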
	I0731 20:59:30.832877  188656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:59:30.842909  188656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:30.842995  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:30.852951  188656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 20:59:30.872643  188656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:30.889851  188656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 20:59:30.910958  188656 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:30.915645  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:30.928698  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:31.055628  188656 ssh_runner.go:195] Run: sudo systemctl start kubelet
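If the kubelet does not come up cleanly after this start, the rendered unit plus its drop-in can be inspected directly on the guest (paths match the scp steps above):

	systemctl cat kubelet                           # /lib/systemd/system/kubelet.service + 10-kubeadm.conf drop-in
	sudo journalctl -u kubelet -n 50 --no-pager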
	I0731 20:59:31.076731  188656 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115 for IP: 192.168.61.51
	I0731 20:59:31.076759  188656 certs.go:194] generating shared ca certs ...
	I0731 20:59:31.076789  188656 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.076979  188656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:31.077041  188656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:31.077057  188656 certs.go:256] generating profile certs ...
	I0731 20:59:31.077175  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key
	I0731 20:59:31.077378  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83
	I0731 20:59:31.077514  188656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key
	I0731 20:59:31.077704  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:31.077789  188656 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:31.077806  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:31.077854  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:31.077892  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:31.077932  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:31.077997  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:31.078906  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:31.126980  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:31.167327  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:31.211947  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:31.258307  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 20:59:31.296628  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:31.342330  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:31.391114  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:31.415097  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:31.442595  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:31.472160  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:31.497814  188656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:31.515890  188656 ssh_runner.go:195] Run: openssl version
	I0731 20:59:31.523423  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:31.537984  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544161  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544225  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.552590  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:31.567190  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:31.581206  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586903  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586966  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.593485  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:31.606764  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:31.619748  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624599  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624681  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.631293  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
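The /etc/ssl/certs symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding certificates, which is why each install step runs openssl x509 -hash first. The same two steps by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"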
	I0731 20:59:31.642823  188656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:31.647273  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:31.653142  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:31.659046  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:31.665552  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:31.671454  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:31.677426  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
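The -checkend 86400 runs above simply ask whether each certificate remains valid for at least the next 24 hours (86400 seconds); openssl exits non-zero if it would expire within that window:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "ok for 24h" || echo "expires within 24h"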
	I0731 20:59:31.683490  188656 kubeadm.go:392] StartCluster: {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:31.683586  188656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:31.683625  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.725466  188656 cri.go:89] found id: ""
	I0731 20:59:31.725548  188656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:31.737025  188656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:31.737050  188656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:31.737113  188656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:31.747325  188656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:31.748325  188656 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:31.748965  188656 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239115" cluster setting kubeconfig missing "old-k8s-version-239115" context setting]
	I0731 20:59:31.749997  188656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.757569  188656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:31.771188  188656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0731 20:59:31.771222  188656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:31.771236  188656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:31.771292  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.811574  188656 cri.go:89] found id: ""
	I0731 20:59:31.811653  188656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:31.829930  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:31.840145  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:31.840165  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:31.840206  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:31.851266  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:31.851340  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:31.861634  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:31.871532  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:31.871605  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:31.882164  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.892222  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:31.892291  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.903299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:31.916163  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:31.916235  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:31.929423  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:31.942668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.107220  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.953249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.207806  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.307640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
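After the control-plane and etcd phases above, kubeadm should have written static pod manifests for the kubelet to launch; their presence can be checked directly (the expected file names are standard kubeadm output):

	ls /etc/kubernetes/manifests/
	# expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml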
	I0731 20:59:33.410338  188656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:33.410444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:33.910958  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.411011  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.911110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.410715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.911117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.410825  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.911311  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.410757  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.910786  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:38.410821  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:38.910891  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.411547  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.911260  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.411404  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.910719  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.411449  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.910643  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.410967  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.910703  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:43.411187  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:43.910997  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.410783  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.911365  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.410690  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.911150  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.411384  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.910579  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.411171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.910578  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:48.411377  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:48.910784  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.411137  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.911453  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.411128  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.911431  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.410483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.910975  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.411519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.911079  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.410802  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.911405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.410870  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.911330  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.411491  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.911380  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.411483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.910602  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.411228  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.910486  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:58.411198  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:58.910774  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.410697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.911233  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.411170  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.911416  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.410979  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.911444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.411537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.911216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:03.411386  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:03.910942  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.411505  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.911485  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.410763  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.910937  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.411216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.910743  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.410941  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.910922  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:08.410593  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:08.910788  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.410807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.911286  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.411372  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.910748  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.411253  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.411208  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.910887  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:13.411318  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:13.910943  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.410728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.911343  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.410545  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.910560  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.411117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.910537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.410761  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.910796  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:18.411138  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:18.911394  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.411098  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.910629  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.410698  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.910760  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.410503  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.910582  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.410724  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.910792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:23.410961  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:23.910510  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.410725  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.411543  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.911473  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.410494  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.910519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.410950  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.911528  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:28.411350  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:28.911371  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.411269  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.911465  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.410633  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.911166  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.411184  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.910806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.410806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.911125  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:33.410942  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:33.411021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:33.461204  188656 cri.go:89] found id: ""
	I0731 21:00:33.461232  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.461241  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:33.461249  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:33.461313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:33.500898  188656 cri.go:89] found id: ""
	I0731 21:00:33.500927  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.500937  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:33.500944  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:33.501010  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:33.536865  188656 cri.go:89] found id: ""
	I0731 21:00:33.536889  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.536902  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:33.536908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:33.536957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:33.578540  188656 cri.go:89] found id: ""
	I0731 21:00:33.578570  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.578582  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:33.578595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:33.578686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:33.616242  188656 cri.go:89] found id: ""
	I0731 21:00:33.616266  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.616276  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:33.616283  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:33.616345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:33.650436  188656 cri.go:89] found id: ""
	I0731 21:00:33.650468  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.650479  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:33.650487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:33.650552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:33.687256  188656 cri.go:89] found id: ""
	I0731 21:00:33.687288  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.687300  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:33.687308  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:33.687365  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:33.720381  188656 cri.go:89] found id: ""
	I0731 21:00:33.720428  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.720440  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:33.720453  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:33.720469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:33.772182  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:33.772226  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:33.787323  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:33.787359  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:33.907858  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:33.907878  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:33.907892  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:33.974118  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:33.974157  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
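One pass of the fallback above amounts to: enumerate CRI containers for each control-plane component, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status output; the describe-nodes step keeps failing because nothing is listening on localhost:8443. A condensed sketch of that cycle using the commands taken from the log (run on the node over SSH; ordering simplified and error handling omitted):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"        # every query here returns no container IDs
	done
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # refused: localhost:8443 is down
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a            # container status fallback, condensed from the logged command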
	I0731 21:00:36.513427  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:36.527531  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:36.527588  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:36.567679  188656 cri.go:89] found id: ""
	I0731 21:00:36.567706  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.567714  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:36.567726  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:36.567786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:36.608106  188656 cri.go:89] found id: ""
	I0731 21:00:36.608134  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.608145  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:36.608153  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:36.608215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:36.651783  188656 cri.go:89] found id: ""
	I0731 21:00:36.651815  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.651824  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:36.651830  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:36.651892  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:36.686716  188656 cri.go:89] found id: ""
	I0731 21:00:36.686743  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.686751  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:36.686758  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:36.686823  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:36.721823  188656 cri.go:89] found id: ""
	I0731 21:00:36.721857  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.721865  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:36.721871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:36.721939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:36.758060  188656 cri.go:89] found id: ""
	I0731 21:00:36.758093  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.758103  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:36.758112  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:36.758173  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:36.801667  188656 cri.go:89] found id: ""
	I0731 21:00:36.801694  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.801704  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:36.801712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:36.801776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:36.845084  188656 cri.go:89] found id: ""
	I0731 21:00:36.845113  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.845124  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:36.845137  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:36.845152  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:36.897208  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:36.897248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:36.910716  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:36.910750  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:36.987259  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:36.987285  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:36.987304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:37.061109  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:37.061144  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:39.600847  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:39.615897  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:39.615957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:39.655390  188656 cri.go:89] found id: ""
	I0731 21:00:39.655417  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.655424  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:39.655430  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:39.655502  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:39.694180  188656 cri.go:89] found id: ""
	I0731 21:00:39.694213  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.694224  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:39.694231  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:39.694300  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:39.736752  188656 cri.go:89] found id: ""
	I0731 21:00:39.736783  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.736793  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:39.736801  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:39.736860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:39.775685  188656 cri.go:89] found id: ""
	I0731 21:00:39.775770  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.775790  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:39.775802  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:39.775871  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:39.816790  188656 cri.go:89] found id: ""
	I0731 21:00:39.816820  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.816829  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:39.816835  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:39.816886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:39.854931  188656 cri.go:89] found id: ""
	I0731 21:00:39.854963  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.854973  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:39.854981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:39.855045  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:39.891039  188656 cri.go:89] found id: ""
	I0731 21:00:39.891066  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.891074  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:39.891083  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:39.891136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:39.927434  188656 cri.go:89] found id: ""
	I0731 21:00:39.927463  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.927473  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:39.927483  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:39.927496  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:39.941240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:39.941272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:40.017212  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:40.017233  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:40.017246  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:40.094047  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:40.094081  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:40.138940  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:40.138966  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:42.690818  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:42.704855  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:42.704931  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:42.752315  188656 cri.go:89] found id: ""
	I0731 21:00:42.752347  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.752368  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:42.752376  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:42.752445  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:42.790060  188656 cri.go:89] found id: ""
	I0731 21:00:42.790090  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.790101  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:42.790109  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:42.790220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:42.825504  188656 cri.go:89] found id: ""
	I0731 21:00:42.825532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.825540  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:42.825547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:42.825598  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:42.860157  188656 cri.go:89] found id: ""
	I0731 21:00:42.860193  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.860204  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:42.860213  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:42.860286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:42.902914  188656 cri.go:89] found id: ""
	I0731 21:00:42.902947  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.902959  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:42.902967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:42.903036  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:42.950503  188656 cri.go:89] found id: ""
	I0731 21:00:42.950532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.950541  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:42.950550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:42.950603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:43.010232  188656 cri.go:89] found id: ""
	I0731 21:00:43.010261  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.010272  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:43.010280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:43.010344  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:43.045487  188656 cri.go:89] found id: ""
	I0731 21:00:43.045517  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.045527  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:43.045539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:43.045556  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:43.123248  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:43.123279  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:43.123296  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:43.212230  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:43.212272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:43.254595  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:43.254626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:43.306187  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:43.306227  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:45.820246  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:45.835707  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:45.835786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:45.872079  188656 cri.go:89] found id: ""
	I0731 21:00:45.872110  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.872122  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:45.872130  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:45.872196  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:45.910637  188656 cri.go:89] found id: ""
	I0731 21:00:45.910664  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.910672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:45.910678  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:45.910740  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:45.945316  188656 cri.go:89] found id: ""
	I0731 21:00:45.945360  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.945372  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:45.945380  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:45.945455  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:45.982015  188656 cri.go:89] found id: ""
	I0731 21:00:45.982046  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.982057  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:45.982096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:45.982165  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:46.017359  188656 cri.go:89] found id: ""
	I0731 21:00:46.017392  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.017404  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:46.017412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:46.017478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:46.054401  188656 cri.go:89] found id: ""
	I0731 21:00:46.054431  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.054447  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:46.054454  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:46.054507  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:46.092107  188656 cri.go:89] found id: ""
	I0731 21:00:46.092130  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.092137  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:46.092143  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:46.092190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:46.128613  188656 cri.go:89] found id: ""
	I0731 21:00:46.128642  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.128652  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:46.128665  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:46.128679  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:46.144539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:46.144570  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:46.219399  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:46.219433  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:46.219448  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:46.304486  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:46.304529  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:46.344087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:46.344121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:48.894728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:48.916610  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:48.916675  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:48.978515  188656 cri.go:89] found id: ""
	I0731 21:00:48.978543  188656 logs.go:276] 0 containers: []
	W0731 21:00:48.978550  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:48.978557  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:48.978615  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:49.026224  188656 cri.go:89] found id: ""
	I0731 21:00:49.026257  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.026268  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:49.026276  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:49.026354  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:49.064967  188656 cri.go:89] found id: ""
	I0731 21:00:49.064994  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.065003  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:49.065010  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:49.065070  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:49.101966  188656 cri.go:89] found id: ""
	I0731 21:00:49.101990  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.101999  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:49.102004  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:49.102056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:49.137775  188656 cri.go:89] found id: ""
	I0731 21:00:49.137801  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.137809  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:49.137815  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:49.137867  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:49.173778  188656 cri.go:89] found id: ""
	I0731 21:00:49.173824  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.173832  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:49.173839  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:49.173908  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:49.207211  188656 cri.go:89] found id: ""
	I0731 21:00:49.207239  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.207247  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:49.207254  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:49.207333  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:49.244126  188656 cri.go:89] found id: ""
	I0731 21:00:49.244159  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.244180  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:49.244202  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:49.244221  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:49.299606  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:49.299646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:49.314093  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:49.314121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:49.384691  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:49.384712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:49.384728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:49.464425  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:49.464462  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.005670  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:52.019617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:52.019705  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:52.053452  188656 cri.go:89] found id: ""
	I0731 21:00:52.053485  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.053494  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:52.053500  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:52.053552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:52.094462  188656 cri.go:89] found id: ""
	I0731 21:00:52.094495  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.094504  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:52.094510  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:52.094572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:52.134555  188656 cri.go:89] found id: ""
	I0731 21:00:52.134584  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.134595  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:52.134602  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:52.134676  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:52.168805  188656 cri.go:89] found id: ""
	I0731 21:00:52.168851  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.168863  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:52.168871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:52.168939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:52.203093  188656 cri.go:89] found id: ""
	I0731 21:00:52.203121  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.203132  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:52.203140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:52.203213  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:52.237816  188656 cri.go:89] found id: ""
	I0731 21:00:52.237842  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.237850  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:52.237857  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:52.237906  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:52.272136  188656 cri.go:89] found id: ""
	I0731 21:00:52.272175  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.272194  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:52.272202  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:52.272261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:52.306616  188656 cri.go:89] found id: ""
	I0731 21:00:52.306641  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.306649  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:52.306659  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:52.306671  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:52.372668  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:52.372690  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:52.372707  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:52.457752  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:52.457794  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.496087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:52.496129  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:52.548137  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:52.548176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:55.063463  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:55.076922  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:55.077005  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:55.117479  188656 cri.go:89] found id: ""
	I0731 21:00:55.117511  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.117523  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:55.117531  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:55.117595  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:55.156311  188656 cri.go:89] found id: ""
	I0731 21:00:55.156339  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.156348  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:55.156354  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:55.156421  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:55.196778  188656 cri.go:89] found id: ""
	I0731 21:00:55.196807  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.196818  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:55.196826  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:55.196898  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:55.237575  188656 cri.go:89] found id: ""
	I0731 21:00:55.237605  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.237614  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:55.237620  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:55.237672  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:55.271717  188656 cri.go:89] found id: ""
	I0731 21:00:55.271746  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.271754  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:55.271760  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:55.271811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:55.307586  188656 cri.go:89] found id: ""
	I0731 21:00:55.307618  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.307630  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:55.307637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:55.307708  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:55.343325  188656 cri.go:89] found id: ""
	I0731 21:00:55.343352  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.343361  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:55.343367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:55.343418  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:55.378959  188656 cri.go:89] found id: ""
	I0731 21:00:55.378988  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.378997  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:55.379008  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:55.379021  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:55.454213  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:55.454243  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:55.454260  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:55.532802  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:55.532839  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.575903  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:55.575940  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:55.635105  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:55.635140  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.149801  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:58.162682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:58.162743  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:58.196220  188656 cri.go:89] found id: ""
	I0731 21:00:58.196245  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.196254  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:58.196260  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:58.196313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:58.231052  188656 cri.go:89] found id: ""
	I0731 21:00:58.231083  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.231093  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:58.231099  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:58.231156  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:58.265569  188656 cri.go:89] found id: ""
	I0731 21:00:58.265599  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.265612  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:58.265633  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:58.265695  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:58.300750  188656 cri.go:89] found id: ""
	I0731 21:00:58.300779  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.300788  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:58.300793  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:58.300869  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:58.333920  188656 cri.go:89] found id: ""
	I0731 21:00:58.333949  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.333958  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:58.333963  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:58.334015  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:58.368732  188656 cri.go:89] found id: ""
	I0731 21:00:58.368759  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.368771  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:58.368787  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:58.368855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:58.408454  188656 cri.go:89] found id: ""
	I0731 21:00:58.408488  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.408501  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:58.408510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:58.408575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:58.445855  188656 cri.go:89] found id: ""
	I0731 21:00:58.445888  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.445900  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:58.445913  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:58.445934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:58.496144  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:58.496177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.510708  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:58.510743  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:58.580690  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:58.580712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:58.580725  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:58.657281  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:58.657320  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:01.196374  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:01.209044  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:01.209111  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:01.247313  188656 cri.go:89] found id: ""
	I0731 21:01:01.247343  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.247353  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:01.247360  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:01.247443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:01.282269  188656 cri.go:89] found id: ""
	I0731 21:01:01.282300  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.282308  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:01.282314  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:01.282370  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:01.315598  188656 cri.go:89] found id: ""
	I0731 21:01:01.315628  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.315638  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:01.315644  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:01.315697  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:01.352492  188656 cri.go:89] found id: ""
	I0731 21:01:01.352521  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.352533  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:01.352540  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:01.352605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:01.387858  188656 cri.go:89] found id: ""
	I0731 21:01:01.387885  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.387894  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:01.387900  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:01.387950  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:01.425014  188656 cri.go:89] found id: ""
	I0731 21:01:01.425042  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.425052  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:01.425061  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:01.425129  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:01.463068  188656 cri.go:89] found id: ""
	I0731 21:01:01.463098  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.463107  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:01.463113  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:01.463171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:01.500174  188656 cri.go:89] found id: ""
	I0731 21:01:01.500203  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.500214  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:01.500229  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:01.500244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:01.554350  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:01.554389  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:01.569353  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:01.569394  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:01.641074  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:01.641095  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:01.641108  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:01.722340  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:01.722377  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:04.264035  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:04.278374  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:04.278441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:04.314037  188656 cri.go:89] found id: ""
	I0731 21:01:04.314068  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.314079  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:04.314087  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:04.314159  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:04.347604  188656 cri.go:89] found id: ""
	I0731 21:01:04.347635  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.347646  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:04.347653  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:04.347718  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:04.382412  188656 cri.go:89] found id: ""
	I0731 21:01:04.382442  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.382454  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:04.382462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:04.382516  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:04.419097  188656 cri.go:89] found id: ""
	I0731 21:01:04.419130  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.419142  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:04.419150  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:04.419209  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:04.464561  188656 cri.go:89] found id: ""
	I0731 21:01:04.464592  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.464601  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:04.464607  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:04.464683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:04.500484  188656 cri.go:89] found id: ""
	I0731 21:01:04.500510  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.500518  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:04.500524  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:04.500577  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:04.536211  188656 cri.go:89] found id: ""
	I0731 21:01:04.536239  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.536250  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:04.536257  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:04.536324  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:04.569521  188656 cri.go:89] found id: ""
	I0731 21:01:04.569548  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.569556  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:04.569567  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:04.569583  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:04.621228  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:04.621261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:04.637500  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:04.637527  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:04.710577  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:04.710606  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:04.710623  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.788305  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:04.788343  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.329209  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:07.343021  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:07.343089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:07.378556  188656 cri.go:89] found id: ""
	I0731 21:01:07.378588  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.378603  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:07.378610  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:07.378679  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:07.416419  188656 cri.go:89] found id: ""
	I0731 21:01:07.416455  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.416467  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:07.416474  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:07.416538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:07.454720  188656 cri.go:89] found id: ""
	I0731 21:01:07.454749  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.454758  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:07.454764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:07.454815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:07.488963  188656 cri.go:89] found id: ""
	I0731 21:01:07.488995  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.489004  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:07.489009  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:07.489060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:07.531916  188656 cri.go:89] found id: ""
	I0731 21:01:07.531949  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.531961  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:07.531967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:07.532019  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:07.569233  188656 cri.go:89] found id: ""
	I0731 21:01:07.569266  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.569275  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:07.569281  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:07.569350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:07.606318  188656 cri.go:89] found id: ""
	I0731 21:01:07.606349  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.606360  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:07.606368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:07.606442  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:07.641408  188656 cri.go:89] found id: ""
	I0731 21:01:07.641436  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.641445  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:07.641454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:07.641466  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.681094  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:07.681123  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:07.734600  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:07.734641  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:07.748747  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:07.748779  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:07.821775  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:07.821799  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:07.821816  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:10.399973  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:10.412908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:10.412986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:10.448866  188656 cri.go:89] found id: ""
	I0731 21:01:10.448895  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.448903  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:10.448909  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:10.448966  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:10.486309  188656 cri.go:89] found id: ""
	I0731 21:01:10.486338  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.486346  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:10.486352  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:10.486411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:10.522834  188656 cri.go:89] found id: ""
	I0731 21:01:10.522856  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.522863  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:10.522870  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:10.522929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:10.558272  188656 cri.go:89] found id: ""
	I0731 21:01:10.558304  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.558324  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:10.558330  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:10.558391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:10.596560  188656 cri.go:89] found id: ""
	I0731 21:01:10.596589  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.596600  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:10.596608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:10.596668  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:10.633488  188656 cri.go:89] found id: ""
	I0731 21:01:10.633518  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.633529  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:10.633537  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:10.633597  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:10.665779  188656 cri.go:89] found id: ""
	I0731 21:01:10.665812  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.665824  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:10.665832  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:10.665895  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:10.700526  188656 cri.go:89] found id: ""
	I0731 21:01:10.700556  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.700564  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:10.700575  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:10.700587  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:10.753507  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:10.753550  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:10.768056  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:10.768089  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:10.842120  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:10.842142  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:10.842159  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:10.916532  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:10.916565  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:13.456826  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:13.471064  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:13.471130  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:13.505660  188656 cri.go:89] found id: ""
	I0731 21:01:13.505694  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.505707  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:13.505713  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:13.505775  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:13.543084  188656 cri.go:89] found id: ""
	I0731 21:01:13.543109  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.543117  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:13.543123  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:13.543182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:13.578940  188656 cri.go:89] found id: ""
	I0731 21:01:13.578966  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.578974  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:13.578981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:13.579047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:13.617710  188656 cri.go:89] found id: ""
	I0731 21:01:13.617733  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.617740  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:13.617747  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:13.617810  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:13.653535  188656 cri.go:89] found id: ""
	I0731 21:01:13.653567  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.653579  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:13.653587  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:13.653658  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:13.687914  188656 cri.go:89] found id: ""
	I0731 21:01:13.687942  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.687953  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:13.687960  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:13.688031  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:13.725242  188656 cri.go:89] found id: ""
	I0731 21:01:13.725278  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.725287  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:13.725293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:13.725372  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:13.760890  188656 cri.go:89] found id: ""
	I0731 21:01:13.760918  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.760929  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:13.760943  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:13.760958  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:13.810212  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:13.810252  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:13.824229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:13.824259  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:13.895306  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:13.895331  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:13.895344  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:13.976366  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:13.976411  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.520165  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:16.533970  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:16.534035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:16.571444  188656 cri.go:89] found id: ""
	I0731 21:01:16.571474  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.571482  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:16.571488  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:16.571539  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:16.608150  188656 cri.go:89] found id: ""
	I0731 21:01:16.608176  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.608186  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:16.608194  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:16.608254  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:16.643252  188656 cri.go:89] found id: ""
	I0731 21:01:16.643283  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.643294  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:16.643302  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:16.643363  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:16.679521  188656 cri.go:89] found id: ""
	I0731 21:01:16.679552  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.679563  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:16.679571  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:16.679624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:16.713502  188656 cri.go:89] found id: ""
	I0731 21:01:16.713532  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.713541  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:16.713547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:16.713624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:16.748276  188656 cri.go:89] found id: ""
	I0731 21:01:16.748309  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.748318  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:16.748324  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:16.748383  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:16.783895  188656 cri.go:89] found id: ""
	I0731 21:01:16.783929  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.783940  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:16.783948  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:16.784014  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:16.817362  188656 cri.go:89] found id: ""
	I0731 21:01:16.817392  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.817415  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:16.817425  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:16.817440  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:16.872584  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:16.872637  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:16.887240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:16.887275  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:16.961920  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:16.961949  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:16.961967  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:17.041889  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:17.041924  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:19.585935  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:19.600389  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:19.600475  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:19.635883  188656 cri.go:89] found id: ""
	I0731 21:01:19.635913  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.635924  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:19.635932  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:19.635995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:19.674413  188656 cri.go:89] found id: ""
	I0731 21:01:19.674441  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.674459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:19.674471  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:19.674538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:19.708181  188656 cri.go:89] found id: ""
	I0731 21:01:19.708211  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.708219  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:19.708224  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:19.708292  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:19.744737  188656 cri.go:89] found id: ""
	I0731 21:01:19.744774  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.744783  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:19.744791  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:19.744849  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:19.784366  188656 cri.go:89] found id: ""
	I0731 21:01:19.784398  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.784406  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:19.784412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:19.784465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:19.819234  188656 cri.go:89] found id: ""
	I0731 21:01:19.819269  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.819280  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:19.819289  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:19.819355  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:19.851462  188656 cri.go:89] found id: ""
	I0731 21:01:19.851494  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.851503  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:19.851510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:19.851563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:19.896575  188656 cri.go:89] found id: ""
	I0731 21:01:19.896604  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.896612  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:19.896624  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:19.896640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:19.952239  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:19.952284  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:19.969411  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:19.969442  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:20.042820  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:20.042847  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:20.042863  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:20.130070  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:20.130115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:22.674956  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:22.688548  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:22.688616  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:22.728750  188656 cri.go:89] found id: ""
	I0731 21:01:22.728775  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.728784  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:22.728790  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:22.728844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:22.763765  188656 cri.go:89] found id: ""
	I0731 21:01:22.763793  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.763801  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:22.763807  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:22.763858  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:22.799134  188656 cri.go:89] found id: ""
	I0731 21:01:22.799163  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.799172  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:22.799178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:22.799237  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:22.833972  188656 cri.go:89] found id: ""
	I0731 21:01:22.833998  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.834005  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:22.834011  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:22.834060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:22.869686  188656 cri.go:89] found id: ""
	I0731 21:01:22.869711  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.869719  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:22.869724  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:22.869776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:22.907919  188656 cri.go:89] found id: ""
	I0731 21:01:22.907950  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.907961  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:22.907969  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:22.908035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:22.947162  188656 cri.go:89] found id: ""
	I0731 21:01:22.947192  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.947204  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:22.947212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:22.947273  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:22.992822  188656 cri.go:89] found id: ""
	I0731 21:01:22.992860  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.992872  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:22.992884  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:22.992900  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:23.045552  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:23.045589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:23.059895  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:23.059925  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:23.135535  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:23.135561  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:23.135577  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:23.217468  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:23.217521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:25.771615  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:25.785037  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:25.785115  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:25.821070  188656 cri.go:89] found id: ""
	I0731 21:01:25.821100  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.821112  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:25.821120  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:25.821176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:25.856174  188656 cri.go:89] found id: ""
	I0731 21:01:25.856206  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.856217  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:25.856225  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:25.856288  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:25.889440  188656 cri.go:89] found id: ""
	I0731 21:01:25.889473  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.889483  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:25.889490  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:25.889546  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:25.924770  188656 cri.go:89] found id: ""
	I0731 21:01:25.924796  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.924804  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:25.924811  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:25.924860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:25.963529  188656 cri.go:89] found id: ""
	I0731 21:01:25.963576  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.963588  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:25.963595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:25.963670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:26.000033  188656 cri.go:89] found id: ""
	I0731 21:01:26.000060  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.000069  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:26.000076  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:26.000133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:26.035310  188656 cri.go:89] found id: ""
	I0731 21:01:26.035341  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.035353  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:26.035359  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:26.035423  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:26.070096  188656 cri.go:89] found id: ""
	I0731 21:01:26.070119  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.070127  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:26.070138  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:26.070149  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:26.141198  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:26.141220  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:26.141237  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:26.219766  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:26.219805  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:26.264836  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:26.264864  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:26.316672  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:26.316709  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:28.832882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:28.846243  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:28.846307  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:28.880312  188656 cri.go:89] found id: ""
	I0731 21:01:28.880339  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.880350  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:28.880358  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:28.880419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:28.914625  188656 cri.go:89] found id: ""
	I0731 21:01:28.914652  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.914660  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:28.914667  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:28.914726  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:28.949138  188656 cri.go:89] found id: ""
	I0731 21:01:28.949173  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.949185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:28.949192  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:28.949264  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:28.985229  188656 cri.go:89] found id: ""
	I0731 21:01:28.985258  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.985266  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:28.985272  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:28.985326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:29.021520  188656 cri.go:89] found id: ""
	I0731 21:01:29.021550  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.021562  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:29.021568  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:29.021629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:29.058639  188656 cri.go:89] found id: ""
	I0731 21:01:29.058671  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.058682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:29.058690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:29.058755  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:29.105435  188656 cri.go:89] found id: ""
	I0731 21:01:29.105458  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.105466  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:29.105472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:29.105528  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:29.147118  188656 cri.go:89] found id: ""
	I0731 21:01:29.147144  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.147152  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:29.147161  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:29.147177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:29.231698  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:29.231735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:29.276163  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:29.276200  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:29.330551  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:29.330589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:29.350293  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:29.350323  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:29.456073  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:31.956964  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:31.970712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:31.970780  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:32.009546  188656 cri.go:89] found id: ""
	I0731 21:01:32.009574  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.009585  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:32.009593  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:32.009674  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:32.046622  188656 cri.go:89] found id: ""
	I0731 21:01:32.046661  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.046672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:32.046680  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:32.046748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:32.080958  188656 cri.go:89] found id: ""
	I0731 21:01:32.080985  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.080993  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:32.080998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:32.081052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:32.117454  188656 cri.go:89] found id: ""
	I0731 21:01:32.117480  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.117489  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:32.117495  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:32.117561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:32.152335  188656 cri.go:89] found id: ""
	I0731 21:01:32.152369  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.152380  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:32.152387  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:32.152441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:32.186631  188656 cri.go:89] found id: ""
	I0731 21:01:32.186670  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.186682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:32.186691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:32.186761  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:32.221496  188656 cri.go:89] found id: ""
	I0731 21:01:32.221533  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.221544  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:32.221551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:32.221632  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:32.256315  188656 cri.go:89] found id: ""
	I0731 21:01:32.256341  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.256350  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:32.256360  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:32.256372  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:32.295759  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:32.295788  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:32.347855  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:32.347888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:32.360982  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:32.361012  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:32.433900  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:32.433926  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:32.433947  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:35.013369  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:35.027203  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:35.027298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:35.065567  188656 cri.go:89] found id: ""
	I0731 21:01:35.065599  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.065610  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:35.065617  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:35.065686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:35.104285  188656 cri.go:89] found id: ""
	I0731 21:01:35.104317  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.104328  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:35.104335  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:35.104430  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:35.151081  188656 cri.go:89] found id: ""
	I0731 21:01:35.151108  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.151119  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:35.151127  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:35.151190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:35.196844  188656 cri.go:89] found id: ""
	I0731 21:01:35.196875  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.196886  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:35.196894  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:35.196964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:35.253581  188656 cri.go:89] found id: ""
	I0731 21:01:35.253612  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.253623  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:35.253630  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:35.253703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:35.295791  188656 cri.go:89] found id: ""
	I0731 21:01:35.295819  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.295830  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:35.295838  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:35.295904  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:35.329405  188656 cri.go:89] found id: ""
	I0731 21:01:35.329441  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.329454  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:35.329462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:35.329526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:35.363976  188656 cri.go:89] found id: ""
	I0731 21:01:35.364009  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.364022  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:35.364035  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:35.364051  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:35.421213  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:35.421253  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:35.436612  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:35.436646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:35.514154  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:35.514182  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:35.514197  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:35.588048  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:35.588082  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:38.133466  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:38.147071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:38.147142  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:38.179992  188656 cri.go:89] found id: ""
	I0731 21:01:38.180024  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.180036  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:38.180044  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:38.180116  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:38.213784  188656 cri.go:89] found id: ""
	I0731 21:01:38.213816  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.213827  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:38.213834  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:38.213901  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:38.254190  188656 cri.go:89] found id: ""
	I0731 21:01:38.254220  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.254229  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:38.254235  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:38.254284  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:38.289695  188656 cri.go:89] found id: ""
	I0731 21:01:38.289732  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.289743  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:38.289751  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:38.289819  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:38.327743  188656 cri.go:89] found id: ""
	I0731 21:01:38.327777  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.327788  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:38.327797  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:38.327853  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:38.361373  188656 cri.go:89] found id: ""
	I0731 21:01:38.361409  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.361421  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:38.361428  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:38.361501  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:38.396832  188656 cri.go:89] found id: ""
	I0731 21:01:38.396860  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.396868  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:38.396873  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:38.396923  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:38.431822  188656 cri.go:89] found id: ""
	I0731 21:01:38.431855  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.431868  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:38.431880  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:38.431895  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:38.481994  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:38.482028  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:38.495885  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:38.495911  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:38.563384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:38.563411  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:38.563437  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:38.646806  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:38.646848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:41.187323  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:41.200995  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:41.201063  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:41.241620  188656 cri.go:89] found id: ""
	I0731 21:01:41.241651  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.241663  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:41.241671  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:41.241745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:41.279565  188656 cri.go:89] found id: ""
	I0731 21:01:41.279595  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.279604  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:41.279609  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:41.279666  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:41.320710  188656 cri.go:89] found id: ""
	I0731 21:01:41.320744  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.320755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:41.320763  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:41.320834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:41.356428  188656 cri.go:89] found id: ""
	I0731 21:01:41.356460  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.356472  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:41.356480  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:41.356544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:41.390493  188656 cri.go:89] found id: ""
	I0731 21:01:41.390525  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.390536  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:41.390544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:41.390612  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:41.424244  188656 cri.go:89] found id: ""
	I0731 21:01:41.424271  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.424282  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:41.424290  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:41.424350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:41.459916  188656 cri.go:89] found id: ""
	I0731 21:01:41.459946  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.459955  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:41.459961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:41.460012  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:41.493891  188656 cri.go:89] found id: ""
	I0731 21:01:41.493917  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.493926  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:41.493936  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:41.493950  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:41.544066  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:41.544106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:41.558504  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:41.558534  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:41.632996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:41.633021  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:41.633039  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:41.712637  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:41.712677  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:44.255947  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:44.268961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:44.269050  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:44.304621  188656 cri.go:89] found id: ""
	I0731 21:01:44.304656  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.304668  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:44.304676  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:44.304732  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:44.339389  188656 cri.go:89] found id: ""
	I0731 21:01:44.339429  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.339441  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:44.339448  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:44.339510  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:44.373069  188656 cri.go:89] found id: ""
	I0731 21:01:44.373095  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.373103  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:44.373110  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:44.373179  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:44.408784  188656 cri.go:89] found id: ""
	I0731 21:01:44.408812  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.408821  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:44.408829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:44.408896  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:44.445636  188656 cri.go:89] found id: ""
	I0731 21:01:44.445671  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.445682  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:44.445690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:44.445759  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:44.483529  188656 cri.go:89] found id: ""
	I0731 21:01:44.483565  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.483577  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:44.483585  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:44.483643  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:44.517959  188656 cri.go:89] found id: ""
	I0731 21:01:44.517980  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.517987  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:44.517993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:44.518042  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:44.552322  188656 cri.go:89] found id: ""
	I0731 21:01:44.552367  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.552392  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:44.552405  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:44.552421  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:44.625005  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:44.625030  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:44.625043  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:44.702547  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:44.702585  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:44.741754  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:44.741792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:44.795179  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:44.795216  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
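	The block above is one pass of minikube's log gatherer: while it waits for an apiserver to appear, it re-runs the same node diagnostics every few seconds. The shell commands it issues over SSH are all visible in the ssh_runner lines; collected into a single sketch below (only the commands from the log, not minikube source, and assuming shell access to the node, e.g. via "minikube ssh"):

	    #!/usr/bin/env bash
	    # Sketch of the per-retry diagnostics shown in the log above:
	    # list CRI containers for each expected component, then gather
	    # kubelet/CRI-O journals, dmesg, the node description and container status.
	    set -u
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      # Same query as in the log: all states, filtered by container name.
	      sudo crictl ps -a --quiet --name="$name"
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # Fails with "connection refused" because nothing is serving the API on :8443.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	Each crictl query returns no IDs, which is why every retry falls through to gathering the journals and ends with the failed describe-nodes call.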
	I0731 21:01:47.309995  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:47.323993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:47.324076  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:47.365546  188656 cri.go:89] found id: ""
	I0731 21:01:47.365576  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.365587  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:47.365595  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:47.365682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:47.402774  188656 cri.go:89] found id: ""
	I0731 21:01:47.402810  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.402822  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:47.402831  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:47.402899  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:47.440716  188656 cri.go:89] found id: ""
	I0731 21:01:47.440746  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.440755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:47.440761  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:47.440811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:47.479418  188656 cri.go:89] found id: ""
	I0731 21:01:47.479450  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.479461  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:47.479469  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:47.479535  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:47.514027  188656 cri.go:89] found id: ""
	I0731 21:01:47.514065  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.514074  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:47.514081  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:47.514149  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:47.550178  188656 cri.go:89] found id: ""
	I0731 21:01:47.550203  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.550212  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:47.550218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:47.550271  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:47.587844  188656 cri.go:89] found id: ""
	I0731 21:01:47.587873  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.587883  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:47.587891  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:47.587945  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:47.627581  188656 cri.go:89] found id: ""
	I0731 21:01:47.627608  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.627620  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:47.627633  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:47.627647  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:47.683364  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:47.683408  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.697882  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:47.697917  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:47.773804  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:47.773834  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:47.773848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:47.859356  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:47.859404  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:50.402403  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:50.417269  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:50.417332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:50.452762  188656 cri.go:89] found id: ""
	I0731 21:01:50.452786  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.452793  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:50.452799  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:50.452852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:50.486741  188656 cri.go:89] found id: ""
	I0731 21:01:50.486771  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.486782  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:50.486789  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:50.486855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:50.526144  188656 cri.go:89] found id: ""
	I0731 21:01:50.526174  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.526185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:50.526193  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:50.526246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:50.560957  188656 cri.go:89] found id: ""
	I0731 21:01:50.560985  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.560995  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:50.561003  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:50.561065  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:50.597228  188656 cri.go:89] found id: ""
	I0731 21:01:50.597258  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.597269  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:50.597275  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:50.597357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:50.638153  188656 cri.go:89] found id: ""
	I0731 21:01:50.638183  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.638199  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:50.638208  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:50.638270  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:50.672236  188656 cri.go:89] found id: ""
	I0731 21:01:50.672266  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.672274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:50.672280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:50.672340  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:50.704069  188656 cri.go:89] found id: ""
	I0731 21:01:50.704093  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.704102  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:50.704112  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:50.704125  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:50.757973  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:50.758010  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:50.771203  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:50.771229  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:50.842937  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:50.842956  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:50.842969  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:50.925819  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:50.925857  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.470691  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:53.485260  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:53.485332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:53.524110  188656 cri.go:89] found id: ""
	I0731 21:01:53.524139  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.524148  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:53.524154  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:53.524215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:53.557642  188656 cri.go:89] found id: ""
	I0731 21:01:53.557668  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.557676  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:53.557682  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:53.557737  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:53.595594  188656 cri.go:89] found id: ""
	I0731 21:01:53.595622  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.595641  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:53.595647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:53.595712  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:53.634458  188656 cri.go:89] found id: ""
	I0731 21:01:53.634487  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.634499  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:53.634507  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:53.634567  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:53.674124  188656 cri.go:89] found id: ""
	I0731 21:01:53.674149  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.674157  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:53.674164  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:53.674234  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:53.706861  188656 cri.go:89] found id: ""
	I0731 21:01:53.706888  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.706897  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:53.706903  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:53.706957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:53.745476  188656 cri.go:89] found id: ""
	I0731 21:01:53.745504  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.745511  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:53.745522  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:53.745575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:53.780847  188656 cri.go:89] found id: ""
	I0731 21:01:53.780878  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.780889  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:53.780902  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:53.780922  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:53.853469  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:53.853497  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:53.853517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:53.930506  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:53.930544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.975439  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:53.975475  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:54.027903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:54.027937  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.542860  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:56.557744  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:56.557813  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:56.596034  188656 cri.go:89] found id: ""
	I0731 21:01:56.596065  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.596075  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:56.596082  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:56.596146  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:56.631531  188656 cri.go:89] found id: ""
	I0731 21:01:56.631561  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.631572  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:56.631579  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:56.631653  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:56.665824  188656 cri.go:89] found id: ""
	I0731 21:01:56.665853  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.665865  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:56.665872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:56.665940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:56.698965  188656 cri.go:89] found id: ""
	I0731 21:01:56.698993  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.699002  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:56.699008  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:56.699074  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:56.735314  188656 cri.go:89] found id: ""
	I0731 21:01:56.735347  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.735359  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:56.735367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:56.735443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:56.770350  188656 cri.go:89] found id: ""
	I0731 21:01:56.770383  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.770393  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:56.770402  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:56.770485  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:56.808934  188656 cri.go:89] found id: ""
	I0731 21:01:56.808962  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.808970  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:56.808976  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:56.809027  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:56.845305  188656 cri.go:89] found id: ""
	I0731 21:01:56.845331  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.845354  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:56.845366  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:56.845383  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:56.922810  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:56.922832  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:56.922846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:56.998009  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:56.998046  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:57.037905  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:57.037934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:57.092438  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:57.092469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:59.608087  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:59.622465  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:59.622537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:59.660221  188656 cri.go:89] found id: ""
	I0731 21:01:59.660254  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.660265  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:59.660274  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:59.660338  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:59.696158  188656 cri.go:89] found id: ""
	I0731 21:01:59.696193  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.696205  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:59.696213  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:59.696272  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:59.733607  188656 cri.go:89] found id: ""
	I0731 21:01:59.733635  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.733646  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:59.733656  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:59.733727  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:59.770298  188656 cri.go:89] found id: ""
	I0731 21:01:59.770327  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.770336  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:59.770342  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:59.770396  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:59.805630  188656 cri.go:89] found id: ""
	I0731 21:01:59.805659  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.805670  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:59.805682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:59.805749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:59.841064  188656 cri.go:89] found id: ""
	I0731 21:01:59.841089  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.841098  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:59.841106  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:59.841166  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:59.877237  188656 cri.go:89] found id: ""
	I0731 21:01:59.877265  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.877274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:59.877284  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:59.877364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:59.917102  188656 cri.go:89] found id: ""
	I0731 21:01:59.917138  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.917166  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:59.917179  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:59.917196  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:59.971806  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:59.971846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:59.986267  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:59.986304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:00.063185  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:00.063227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:00.063244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:00.148498  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:00.148541  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:02.690235  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:02.704623  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:02.704703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:02.740557  188656 cri.go:89] found id: ""
	I0731 21:02:02.740588  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.740599  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:02.740606  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:02.740667  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:02.776340  188656 cri.go:89] found id: ""
	I0731 21:02:02.776382  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.776391  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:02.776396  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:02.776449  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:02.811645  188656 cri.go:89] found id: ""
	I0731 21:02:02.811673  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.811683  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:02.811691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:02.811754  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:02.847226  188656 cri.go:89] found id: ""
	I0731 21:02:02.847259  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.847267  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:02.847273  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:02.847326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:02.885591  188656 cri.go:89] found id: ""
	I0731 21:02:02.885617  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.885626  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:02.885631  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:02.885694  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:02.924250  188656 cri.go:89] found id: ""
	I0731 21:02:02.924281  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.924289  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:02.924296  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:02.924358  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:02.959608  188656 cri.go:89] found id: ""
	I0731 21:02:02.959638  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.959649  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:02.959657  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:02.959731  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:02.998175  188656 cri.go:89] found id: ""
	I0731 21:02:02.998205  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.998215  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:02.998228  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:02.998248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:03.053320  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:03.053382  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:03.067681  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:03.067711  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:03.145222  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:03.145251  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:03.145270  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:03.228413  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:03.228456  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
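	Every describe-nodes attempt fails with the same connection-refused error on localhost:8443, consistent with the crictl queries finding no kube-apiserver container at all. A quick manual check of the port (not part of the test run; an illustrative confirmation assuming shell access to the node) could be:

	    # Confirm nothing is listening on the apiserver port and that a direct
	    # request is refused, matching the kubectl error repeated in the log.
	    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	    curl -sk https://localhost:8443/healthz || echo "connection refused, as in the log"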
	I0731 21:02:05.780407  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:05.793872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:05.793952  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:05.828940  188656 cri.go:89] found id: ""
	I0731 21:02:05.828971  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.828980  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:05.828987  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:05.829051  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:05.866470  188656 cri.go:89] found id: ""
	I0731 21:02:05.866503  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.866515  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:05.866522  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:05.866594  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:05.904756  188656 cri.go:89] found id: ""
	I0731 21:02:05.904792  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.904807  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:05.904814  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:05.904868  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:05.941534  188656 cri.go:89] found id: ""
	I0731 21:02:05.941564  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.941574  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:05.941581  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:05.941649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:05.980413  188656 cri.go:89] found id: ""
	I0731 21:02:05.980453  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.980465  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:05.980472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:05.980563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:06.023226  188656 cri.go:89] found id: ""
	I0731 21:02:06.023258  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.023269  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:06.023277  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:06.023345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:06.061098  188656 cri.go:89] found id: ""
	I0731 21:02:06.061130  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.061138  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:06.061145  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:06.061195  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:06.097825  188656 cri.go:89] found id: ""
	I0731 21:02:06.097852  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.097860  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:06.097870  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:06.097883  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:06.149181  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:06.149223  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:06.164610  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:06.164651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:06.248639  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:06.248666  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:06.248684  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:06.332445  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:06.332486  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:08.873697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:08.887632  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:08.887745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:08.926002  188656 cri.go:89] found id: ""
	I0731 21:02:08.926032  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.926042  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:08.926051  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:08.926117  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:08.962999  188656 cri.go:89] found id: ""
	I0731 21:02:08.963028  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.963039  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:08.963047  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:08.963103  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:09.023016  188656 cri.go:89] found id: ""
	I0731 21:02:09.023043  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.023051  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:09.023057  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:09.023109  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:09.059672  188656 cri.go:89] found id: ""
	I0731 21:02:09.059699  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.059708  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:09.059714  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:09.059774  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:09.097603  188656 cri.go:89] found id: ""
	I0731 21:02:09.097635  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.097645  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:09.097653  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:09.097720  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:09.136210  188656 cri.go:89] found id: ""
	I0731 21:02:09.136240  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.136251  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:09.136259  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:09.136326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:09.176167  188656 cri.go:89] found id: ""
	I0731 21:02:09.176204  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.176211  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:09.176218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:09.176277  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:09.214151  188656 cri.go:89] found id: ""
	I0731 21:02:09.214180  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.214189  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:09.214199  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:09.214212  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:09.267579  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:09.267618  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:09.282420  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:09.282445  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:09.354067  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:09.354092  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:09.354111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:09.433454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:09.433500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.979715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:11.993050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:11.993123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:12.027731  188656 cri.go:89] found id: ""
	I0731 21:02:12.027759  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.027767  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:12.027773  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:12.027834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:12.064410  188656 cri.go:89] found id: ""
	I0731 21:02:12.064442  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.064452  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:12.064459  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:12.064525  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:12.101061  188656 cri.go:89] found id: ""
	I0731 21:02:12.101096  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.101107  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:12.101115  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:12.101176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:12.142240  188656 cri.go:89] found id: ""
	I0731 21:02:12.142271  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.142284  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:12.142292  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:12.142357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:12.184949  188656 cri.go:89] found id: ""
	I0731 21:02:12.184980  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.184988  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:12.184994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:12.185064  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:12.226031  188656 cri.go:89] found id: ""
	I0731 21:02:12.226068  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.226080  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:12.226089  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:12.226155  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:12.272880  188656 cri.go:89] found id: ""
	I0731 21:02:12.272913  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.272923  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:12.272931  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:12.272989  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:12.306968  188656 cri.go:89] found id: ""
	I0731 21:02:12.307011  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.307033  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:12.307068  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:12.307090  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:12.359357  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:12.359402  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:12.374817  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:12.374848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:12.445107  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:12.445128  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:12.445141  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:12.530017  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:12.530058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:15.070277  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:15.084326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:15.084411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:15.123513  188656 cri.go:89] found id: ""
	I0731 21:02:15.123549  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.123562  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:15.123569  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:15.123624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:15.159855  188656 cri.go:89] found id: ""
	I0731 21:02:15.159888  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.159899  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:15.159908  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:15.159973  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:15.195879  188656 cri.go:89] found id: ""
	I0731 21:02:15.195911  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.195919  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:15.195926  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:15.195986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:15.231216  188656 cri.go:89] found id: ""
	I0731 21:02:15.231249  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.231258  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:15.231265  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:15.231331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:15.265711  188656 cri.go:89] found id: ""
	I0731 21:02:15.265740  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.265748  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:15.265754  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:15.265803  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:15.300991  188656 cri.go:89] found id: ""
	I0731 21:02:15.301020  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.301027  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:15.301033  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:15.301083  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:15.338507  188656 cri.go:89] found id: ""
	I0731 21:02:15.338533  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.338542  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:15.338550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:15.338614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:15.375540  188656 cri.go:89] found id: ""
	I0731 21:02:15.375583  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.375595  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:15.375606  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:15.375631  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:15.428903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:15.428946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:15.444018  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:15.444052  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:15.518807  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.518842  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:15.518859  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:15.602655  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:15.602693  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.158731  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:18.172861  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:18.172940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:18.207451  188656 cri.go:89] found id: ""
	I0731 21:02:18.207480  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.207489  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:18.207495  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:18.207555  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:18.244974  188656 cri.go:89] found id: ""
	I0731 21:02:18.245004  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.245013  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:18.245019  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:18.245079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:18.281589  188656 cri.go:89] found id: ""
	I0731 21:02:18.281622  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.281630  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:18.281637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:18.281698  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:18.321413  188656 cri.go:89] found id: ""
	I0731 21:02:18.321445  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.321455  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:18.321461  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:18.321526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:18.360600  188656 cri.go:89] found id: ""
	I0731 21:02:18.360627  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.360639  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:18.360647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:18.360707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:18.396312  188656 cri.go:89] found id: ""
	I0731 21:02:18.396344  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.396356  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:18.396364  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:18.396451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:18.431586  188656 cri.go:89] found id: ""
	I0731 21:02:18.431618  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.431630  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:18.431637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:18.431711  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:18.472995  188656 cri.go:89] found id: ""
	I0731 21:02:18.473025  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.473035  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:18.473047  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:18.473063  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:18.558826  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:18.558865  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.600083  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:18.600110  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:18.657944  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:18.657988  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:18.672860  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:18.672888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:18.748806  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:21.249418  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:21.263304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:21.263385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:21.298591  188656 cri.go:89] found id: ""
	I0731 21:02:21.298624  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.298635  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:21.298643  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:21.298707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:21.335913  188656 cri.go:89] found id: ""
	I0731 21:02:21.335939  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.335947  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:21.335954  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:21.336011  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:21.378314  188656 cri.go:89] found id: ""
	I0731 21:02:21.378347  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.378359  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:21.378368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:21.378436  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:21.422707  188656 cri.go:89] found id: ""
	I0731 21:02:21.422738  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.422748  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:21.422757  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:21.422826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:21.487851  188656 cri.go:89] found id: ""
	I0731 21:02:21.487878  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.487887  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:21.487893  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:21.487946  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:21.528944  188656 cri.go:89] found id: ""
	I0731 21:02:21.528970  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.528981  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:21.528990  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:21.529054  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:21.565091  188656 cri.go:89] found id: ""
	I0731 21:02:21.565118  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.565126  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:21.565132  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:21.565182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:21.599985  188656 cri.go:89] found id: ""
	I0731 21:02:21.600015  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.600027  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:21.600041  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:21.600057  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:21.652065  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:21.652106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:21.666497  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:21.666528  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:21.741853  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:21.741893  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:21.741919  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:21.822478  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:21.822517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:24.363018  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:24.375640  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:24.375704  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:24.411383  188656 cri.go:89] found id: ""
	I0731 21:02:24.411416  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.411427  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:24.411436  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:24.411513  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:24.447536  188656 cri.go:89] found id: ""
	I0731 21:02:24.447565  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.447573  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:24.447578  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:24.447651  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:24.489270  188656 cri.go:89] found id: ""
	I0731 21:02:24.489301  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.489311  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:24.489320  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:24.489398  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:24.527891  188656 cri.go:89] found id: ""
	I0731 21:02:24.527922  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.527932  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:24.527938  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:24.527998  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:24.566854  188656 cri.go:89] found id: ""
	I0731 21:02:24.566886  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.566897  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:24.566904  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:24.566974  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:24.606234  188656 cri.go:89] found id: ""
	I0731 21:02:24.606267  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.606278  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:24.606285  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:24.606357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:24.642880  188656 cri.go:89] found id: ""
	I0731 21:02:24.642909  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.642921  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:24.642929  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:24.642982  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:24.680069  188656 cri.go:89] found id: ""
	I0731 21:02:24.680101  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.680112  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:24.680124  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:24.680142  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:24.735337  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:24.735378  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:24.749010  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:24.749040  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:24.826406  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:24.826441  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:24.826458  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.906995  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:24.907049  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.451405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:27.474178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:27.474251  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:27.514912  188656 cri.go:89] found id: ""
	I0731 21:02:27.514938  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.514945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:27.514951  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:27.515007  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:27.552850  188656 cri.go:89] found id: ""
	I0731 21:02:27.552880  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.552890  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:27.552896  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:27.552953  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:27.590468  188656 cri.go:89] found id: ""
	I0731 21:02:27.590496  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.590503  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:27.590509  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:27.590572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:27.626295  188656 cri.go:89] found id: ""
	I0731 21:02:27.626322  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.626330  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:27.626339  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:27.626391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:27.662654  188656 cri.go:89] found id: ""
	I0731 21:02:27.662690  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.662701  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:27.662708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:27.662770  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:27.699528  188656 cri.go:89] found id: ""
	I0731 21:02:27.699558  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.699566  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:27.699572  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:27.699639  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:27.740501  188656 cri.go:89] found id: ""
	I0731 21:02:27.740528  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.740539  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:27.740547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:27.740613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:27.778919  188656 cri.go:89] found id: ""
	I0731 21:02:27.778954  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.778966  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:27.778980  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:27.778999  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.815475  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:27.815500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:27.866578  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:27.866615  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:27.880799  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:27.880830  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:27.948987  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:27.949014  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:27.949032  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:30.532314  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:30.546245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:30.546317  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:30.581736  188656 cri.go:89] found id: ""
	I0731 21:02:30.581763  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.581772  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:30.581778  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:30.581837  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:30.618790  188656 cri.go:89] found id: ""
	I0731 21:02:30.618816  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.618824  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:30.618830  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:30.618886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:30.654504  188656 cri.go:89] found id: ""
	I0731 21:02:30.654530  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.654538  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:30.654544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:30.654603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:30.690570  188656 cri.go:89] found id: ""
	I0731 21:02:30.690598  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.690609  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:30.690617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:30.690683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:30.739676  188656 cri.go:89] found id: ""
	I0731 21:02:30.739705  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.739715  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:30.739723  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:30.739789  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:30.777860  188656 cri.go:89] found id: ""
	I0731 21:02:30.777891  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.777902  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:30.777911  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:30.777995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:30.814036  188656 cri.go:89] found id: ""
	I0731 21:02:30.814073  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.814088  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:30.814096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:30.814168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:30.847262  188656 cri.go:89] found id: ""
	I0731 21:02:30.847292  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.847304  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:30.847316  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:30.847338  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:30.898556  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:30.898596  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:30.912940  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:30.912974  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:30.987384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:30.987405  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:30.987419  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:31.071376  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:31.071416  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:33.613677  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:33.628304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:33.628380  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:33.662932  188656 cri.go:89] found id: ""
	I0731 21:02:33.662965  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.662977  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:33.662985  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:33.663055  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:33.697445  188656 cri.go:89] found id: ""
	I0731 21:02:33.697477  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.697487  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:33.697493  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:33.697553  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:33.734480  188656 cri.go:89] found id: ""
	I0731 21:02:33.734516  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.734527  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:33.734536  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:33.734614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:33.770069  188656 cri.go:89] found id: ""
	I0731 21:02:33.770095  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.770104  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:33.770111  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:33.770194  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:33.806315  188656 cri.go:89] found id: ""
	I0731 21:02:33.806341  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.806350  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:33.806356  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:33.806408  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:33.842747  188656 cri.go:89] found id: ""
	I0731 21:02:33.842775  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.842782  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:33.842789  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:33.842856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:33.877581  188656 cri.go:89] found id: ""
	I0731 21:02:33.877607  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.877616  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:33.877622  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:33.877682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:33.913238  188656 cri.go:89] found id: ""
	I0731 21:02:33.913263  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.913271  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:33.913282  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:33.913298  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:33.967112  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:33.967148  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:33.980961  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:33.980994  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:34.054886  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:34.054917  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:34.054939  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:34.143088  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:34.143127  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:36.687110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:36.700649  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:36.700725  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:36.737796  188656 cri.go:89] found id: ""
	I0731 21:02:36.737829  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.737841  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:36.737849  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:36.737916  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:36.773010  188656 cri.go:89] found id: ""
	I0731 21:02:36.773048  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.773059  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:36.773067  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:36.773136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:36.813945  188656 cri.go:89] found id: ""
	I0731 21:02:36.813978  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.813988  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:36.813994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:36.814047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:36.849826  188656 cri.go:89] found id: ""
	I0731 21:02:36.849860  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.849872  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:36.849880  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:36.849943  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:36.887200  188656 cri.go:89] found id: ""
	I0731 21:02:36.887233  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.887244  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:36.887253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:36.887391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:36.922529  188656 cri.go:89] found id: ""
	I0731 21:02:36.922562  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.922573  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:36.922582  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:36.922644  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:36.958119  188656 cri.go:89] found id: ""
	I0731 21:02:36.958154  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.958166  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:36.958174  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:36.958240  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:37.001071  188656 cri.go:89] found id: ""
	I0731 21:02:37.001104  188656 logs.go:276] 0 containers: []
	W0731 21:02:37.001113  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:37.001123  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:37.001136  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:37.041248  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:37.041288  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:37.100519  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:37.100558  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:37.115157  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:37.115188  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:37.191232  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:37.191259  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:37.191277  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:39.772834  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:39.788137  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:39.788203  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:39.827329  188656 cri.go:89] found id: ""
	I0731 21:02:39.827361  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.827371  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:39.827378  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:39.827458  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:39.864855  188656 cri.go:89] found id: ""
	I0731 21:02:39.864882  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.864889  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:39.864897  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:39.864958  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:39.901955  188656 cri.go:89] found id: ""
	I0731 21:02:39.901981  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.901990  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:39.901996  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:39.902059  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:39.941376  188656 cri.go:89] found id: ""
	I0731 21:02:39.941402  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.941412  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:39.941418  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:39.941473  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:39.975321  188656 cri.go:89] found id: ""
	I0731 21:02:39.975352  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.975364  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:39.975394  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:39.975465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:40.010106  188656 cri.go:89] found id: ""
	I0731 21:02:40.010136  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.010148  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:40.010157  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:40.010220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:40.043963  188656 cri.go:89] found id: ""
	I0731 21:02:40.043997  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.044009  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:40.044017  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:40.044089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:40.079178  188656 cri.go:89] found id: ""
	I0731 21:02:40.079216  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.079224  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:40.079234  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:40.079248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:40.141115  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:40.141158  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:40.156722  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:40.156758  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:40.233758  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:40.233782  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:40.233797  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:40.317316  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:40.317375  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:42.858649  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:42.872135  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:42.872221  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:42.911966  188656 cri.go:89] found id: ""
	I0731 21:02:42.911998  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.912007  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:42.912014  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:42.912081  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:42.950036  188656 cri.go:89] found id: ""
	I0731 21:02:42.950070  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.950079  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:42.950085  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:42.950138  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:42.987201  188656 cri.go:89] found id: ""
	I0731 21:02:42.987233  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.987245  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:42.987253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:42.987326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:43.027250  188656 cri.go:89] found id: ""
	I0731 21:02:43.027285  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.027297  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:43.027306  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:43.027374  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:43.063419  188656 cri.go:89] found id: ""
	I0731 21:02:43.063448  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.063456  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:43.063463  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:43.063527  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:43.101155  188656 cri.go:89] found id: ""
	I0731 21:02:43.101184  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.101193  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:43.101199  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:43.101249  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:43.142633  188656 cri.go:89] found id: ""
	I0731 21:02:43.142658  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.142667  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:43.142675  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:43.142741  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:43.177747  188656 cri.go:89] found id: ""
	I0731 21:02:43.177780  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.177789  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:43.177799  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:43.177813  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:43.228074  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:43.228114  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:43.242132  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:43.242165  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:43.313026  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:43.313054  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:43.313072  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:43.394620  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:43.394663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:45.937932  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:45.951871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:45.951964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:45.987615  188656 cri.go:89] found id: ""
	I0731 21:02:45.987642  188656 logs.go:276] 0 containers: []
	W0731 21:02:45.987650  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:45.987656  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:45.987715  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:46.022632  188656 cri.go:89] found id: ""
	I0731 21:02:46.022659  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.022667  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:46.022674  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:46.022746  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:46.061153  188656 cri.go:89] found id: ""
	I0731 21:02:46.061182  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.061191  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:46.061196  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:46.061246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:46.099168  188656 cri.go:89] found id: ""
	I0731 21:02:46.099197  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.099206  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:46.099212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:46.099266  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:46.137269  188656 cri.go:89] found id: ""
	I0731 21:02:46.137300  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.137312  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:46.137321  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:46.137403  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:46.172330  188656 cri.go:89] found id: ""
	I0731 21:02:46.172391  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.172404  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:46.172417  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:46.172489  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:46.213314  188656 cri.go:89] found id: ""
	I0731 21:02:46.213358  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.213370  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:46.213378  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:46.213451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:46.248663  188656 cri.go:89] found id: ""
	I0731 21:02:46.248697  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.248707  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:46.248719  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:46.248735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:46.305433  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:46.305472  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:46.319065  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:46.319098  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:46.387025  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:46.387046  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:46.387058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:46.476721  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:46.476769  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:49.020882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:49.036502  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:49.036573  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:49.076478  188656 cri.go:89] found id: ""
	I0731 21:02:49.076509  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.076518  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:49.076525  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:49.076578  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:49.116065  188656 cri.go:89] found id: ""
	I0731 21:02:49.116098  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.116106  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:49.116112  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:49.116168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:49.153237  188656 cri.go:89] found id: ""
	I0731 21:02:49.153274  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.153287  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:49.153295  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:49.153385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:49.192821  188656 cri.go:89] found id: ""
	I0731 21:02:49.192849  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.192858  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:49.192864  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:49.192918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:49.230627  188656 cri.go:89] found id: ""
	I0731 21:02:49.230660  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.230671  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:49.230679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:49.230749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:49.266575  188656 cri.go:89] found id: ""
	I0731 21:02:49.266603  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.266611  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:49.266617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:49.266688  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:49.312489  188656 cri.go:89] found id: ""
	I0731 21:02:49.312522  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.312533  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:49.312541  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:49.312613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:49.348907  188656 cri.go:89] found id: ""
	I0731 21:02:49.348932  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.348941  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:49.348950  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:49.348965  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:49.363229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:49.363267  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:49.435708  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:49.435732  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:49.435745  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.522002  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:49.522047  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:49.566823  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:49.566868  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.122660  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:52.136559  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:52.136629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:52.173198  188656 cri.go:89] found id: ""
	I0731 21:02:52.173227  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.173236  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:52.173242  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:52.173310  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:52.208464  188656 cri.go:89] found id: ""
	I0731 21:02:52.208503  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.208514  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:52.208521  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:52.208590  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:52.246052  188656 cri.go:89] found id: ""
	I0731 21:02:52.246084  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.246091  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:52.246098  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:52.246160  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:52.281798  188656 cri.go:89] found id: ""
	I0731 21:02:52.281831  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.281843  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:52.281852  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:52.281918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:52.318924  188656 cri.go:89] found id: ""
	I0731 21:02:52.318954  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.318975  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:52.318983  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:52.319052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:52.356752  188656 cri.go:89] found id: ""
	I0731 21:02:52.356788  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.356800  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:52.356809  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:52.356874  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:52.391507  188656 cri.go:89] found id: ""
	I0731 21:02:52.391537  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.391545  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:52.391551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:52.391602  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:52.430714  188656 cri.go:89] found id: ""
	I0731 21:02:52.430749  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.430761  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:52.430774  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:52.430792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:52.482600  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:52.482629  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.535317  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:52.535361  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:52.549835  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:52.549874  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:52.628319  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:52.628347  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:52.628365  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:55.216678  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:55.231142  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:55.231225  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:55.266283  188656 cri.go:89] found id: ""
	I0731 21:02:55.266321  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.266334  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:55.266341  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:55.266399  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:55.301457  188656 cri.go:89] found id: ""
	I0731 21:02:55.301493  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.301506  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:55.301514  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:55.301574  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:55.338427  188656 cri.go:89] found id: ""
	I0731 21:02:55.338453  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.338461  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:55.338467  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:55.338521  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:55.373718  188656 cri.go:89] found id: ""
	I0731 21:02:55.373748  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.373757  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:55.373764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:55.373846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:55.410989  188656 cri.go:89] found id: ""
	I0731 21:02:55.411022  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.411034  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:55.411042  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:55.411100  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:55.452867  188656 cri.go:89] found id: ""
	I0731 21:02:55.452904  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.452915  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:55.452924  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:55.452995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:55.512781  188656 cri.go:89] found id: ""
	I0731 21:02:55.512809  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.512821  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:55.512829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:55.512894  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:55.550460  188656 cri.go:89] found id: ""
	I0731 21:02:55.550487  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.550495  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:55.550505  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:55.550521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:55.625776  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:55.625804  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:55.625821  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:55.711276  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:55.711322  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:55.765078  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:55.765111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:55.818131  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:55.818176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:58.332914  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:58.346908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:58.346992  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:58.383641  188656 cri.go:89] found id: ""
	I0731 21:02:58.383686  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.383695  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:58.383700  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:58.383753  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:58.419538  188656 cri.go:89] found id: ""
	I0731 21:02:58.419566  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.419576  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:58.419584  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:58.419649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:58.457036  188656 cri.go:89] found id: ""
	I0731 21:02:58.457069  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.457080  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:58.457088  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:58.457162  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:58.497596  188656 cri.go:89] found id: ""
	I0731 21:02:58.497621  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.497629  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:58.497635  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:58.497706  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:58.538184  188656 cri.go:89] found id: ""
	I0731 21:02:58.538211  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.538220  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:58.538226  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:58.538291  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:58.584428  188656 cri.go:89] found id: ""
	I0731 21:02:58.584457  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.584468  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:58.584476  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:58.584537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:58.625052  188656 cri.go:89] found id: ""
	I0731 21:02:58.625084  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.625096  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:58.625103  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:58.625171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:58.662222  188656 cri.go:89] found id: ""
	I0731 21:02:58.662248  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.662256  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:58.662266  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:58.662278  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:58.740491  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:58.740530  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:58.782685  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:58.782714  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:58.833620  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:58.833668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:58.848679  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:58.848713  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:58.925496  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.426171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:01.440261  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:01.440341  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:01.477362  188656 cri.go:89] found id: ""
	I0731 21:03:01.477393  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.477405  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:01.477414  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:01.477483  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:01.516640  188656 cri.go:89] found id: ""
	I0731 21:03:01.516675  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.516692  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:01.516701  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:01.516764  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:01.560713  188656 cri.go:89] found id: ""
	I0731 21:03:01.560744  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.560756  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:01.560762  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:01.560844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:01.604050  188656 cri.go:89] found id: ""
	I0731 21:03:01.604086  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.604097  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:01.604105  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:01.604170  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:01.641358  188656 cri.go:89] found id: ""
	I0731 21:03:01.641391  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.641401  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:01.641406  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:01.641471  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:01.677332  188656 cri.go:89] found id: ""
	I0731 21:03:01.677380  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.677390  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:01.677397  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:01.677459  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:01.713781  188656 cri.go:89] found id: ""
	I0731 21:03:01.713815  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.713826  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:01.713833  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:01.713914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:01.757499  188656 cri.go:89] found id: ""
	I0731 21:03:01.757543  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.757552  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:01.757563  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:01.757575  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:01.832330  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.832370  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:01.832384  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:01.918996  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:01.919050  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:01.979268  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:01.979307  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:02.037528  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:02.037564  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:04.552758  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:04.566881  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:04.566960  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:04.604631  188656 cri.go:89] found id: ""
	I0731 21:03:04.604669  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.604680  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:04.604688  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:04.604791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:04.644027  188656 cri.go:89] found id: ""
	I0731 21:03:04.644052  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.644061  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:04.644068  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:04.644134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:04.680010  188656 cri.go:89] found id: ""
	I0731 21:03:04.680037  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.680045  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:04.680050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:04.680102  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:04.717095  188656 cri.go:89] found id: ""
	I0731 21:03:04.717123  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.717133  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:04.717140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:04.717212  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:04.755297  188656 cri.go:89] found id: ""
	I0731 21:03:04.755324  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.755331  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:04.755337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:04.755387  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:04.792073  188656 cri.go:89] found id: ""
	I0731 21:03:04.792104  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.792113  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:04.792119  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:04.792168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:04.828428  188656 cri.go:89] found id: ""
	I0731 21:03:04.828460  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.828468  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:04.828475  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:04.828541  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:04.863871  188656 cri.go:89] found id: ""
	I0731 21:03:04.863905  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.863916  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:04.863929  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:04.863946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:04.879591  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:04.879626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:04.962199  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:04.962227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:04.962245  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.048502  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:05.048547  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:05.090812  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:05.090838  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:07.647307  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:07.664586  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:07.664656  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:07.719851  188656 cri.go:89] found id: ""
	I0731 21:03:07.719887  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.719899  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:07.719908  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:07.719978  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:07.778295  188656 cri.go:89] found id: ""
	I0731 21:03:07.778330  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.778343  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:07.778350  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:07.778417  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:07.817911  188656 cri.go:89] found id: ""
	I0731 21:03:07.817937  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.817947  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:07.817954  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:07.818004  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:07.853177  188656 cri.go:89] found id: ""
	I0731 21:03:07.853211  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.853222  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:07.853229  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:07.853308  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:07.888992  188656 cri.go:89] found id: ""
	I0731 21:03:07.889020  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.889046  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:07.889055  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:07.889133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:07.924327  188656 cri.go:89] found id: ""
	I0731 21:03:07.924358  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.924369  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:07.924377  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:07.924461  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:07.964438  188656 cri.go:89] found id: ""
	I0731 21:03:07.964470  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.964480  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:07.964489  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:07.964572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:08.003566  188656 cri.go:89] found id: ""
	I0731 21:03:08.003610  188656 logs.go:276] 0 containers: []
	W0731 21:03:08.003621  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:08.003634  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:08.003651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:08.044246  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:08.044286  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:08.097479  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:08.097517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:08.113636  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:08.113663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:08.187217  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:08.187244  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:08.187261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:10.771248  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:10.786159  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:10.786232  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:10.823724  188656 cri.go:89] found id: ""
	I0731 21:03:10.823756  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.823769  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:10.823777  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:10.823846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:10.862440  188656 cri.go:89] found id: ""
	I0731 21:03:10.862468  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.862480  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:10.862488  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:10.862544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:10.901499  188656 cri.go:89] found id: ""
	I0731 21:03:10.901527  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.901539  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:10.901547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:10.901611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:10.940255  188656 cri.go:89] found id: ""
	I0731 21:03:10.940279  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.940287  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:10.940293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:10.940356  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:10.975315  188656 cri.go:89] found id: ""
	I0731 21:03:10.975344  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.975353  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:10.975360  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:10.975420  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:11.011453  188656 cri.go:89] found id: ""
	I0731 21:03:11.011482  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.011538  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:11.011549  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:11.011611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:11.047846  188656 cri.go:89] found id: ""
	I0731 21:03:11.047887  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.047899  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:11.047907  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:11.047972  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:11.086243  188656 cri.go:89] found id: ""
	I0731 21:03:11.086271  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.086282  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:11.086293  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:11.086309  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:11.139390  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:11.139430  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:11.154637  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:11.154669  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:11.225996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:11.226019  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:11.226035  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:11.305235  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:11.305280  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:13.845792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:13.859185  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:13.859261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:13.896017  188656 cri.go:89] found id: ""
	I0731 21:03:13.896047  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.896055  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:13.896061  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:13.896123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:13.932442  188656 cri.go:89] found id: ""
	I0731 21:03:13.932475  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.932486  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:13.932494  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:13.932564  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:13.971233  188656 cri.go:89] found id: ""
	I0731 21:03:13.971265  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.971274  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:13.971280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:13.971331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:14.009757  188656 cri.go:89] found id: ""
	I0731 21:03:14.009787  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.009796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:14.009805  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:14.009870  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:14.047946  188656 cri.go:89] found id: ""
	I0731 21:03:14.047979  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.047990  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:14.047998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:14.048056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:14.084687  188656 cri.go:89] found id: ""
	I0731 21:03:14.084720  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.084731  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:14.084739  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:14.084805  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:14.124831  188656 cri.go:89] found id: ""
	I0731 21:03:14.124861  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.124870  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:14.124876  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:14.124929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:14.161242  188656 cri.go:89] found id: ""
	I0731 21:03:14.161275  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.161286  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:14.161295  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:14.161308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:14.241060  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:14.241115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:14.282382  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:14.282414  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:14.335201  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:14.335249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:14.351345  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:14.351379  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:14.436524  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:16.937313  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:16.951403  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:16.951490  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:16.991735  188656 cri.go:89] found id: ""
	I0731 21:03:16.991766  188656 logs.go:276] 0 containers: []
	W0731 21:03:16.991777  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:16.991785  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:16.991852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:17.030327  188656 cri.go:89] found id: ""
	I0731 21:03:17.030353  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.030360  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:17.030366  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:17.030419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:17.068161  188656 cri.go:89] found id: ""
	I0731 21:03:17.068195  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.068206  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:17.068214  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:17.068286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:17.105561  188656 cri.go:89] found id: ""
	I0731 21:03:17.105590  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.105601  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:17.105609  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:17.105684  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:17.144503  188656 cri.go:89] found id: ""
	I0731 21:03:17.144529  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.144540  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:17.144547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:17.144610  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:17.183709  188656 cri.go:89] found id: ""
	I0731 21:03:17.183738  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.183747  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:17.183753  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:17.183815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:17.222083  188656 cri.go:89] found id: ""
	I0731 21:03:17.222109  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.222117  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:17.222124  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:17.222178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:17.259503  188656 cri.go:89] found id: ""
	I0731 21:03:17.259534  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.259547  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:17.259561  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:17.259578  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:17.300603  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:17.300642  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:17.352194  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:17.352235  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:17.367179  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:17.367209  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:17.440051  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:17.440074  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:17.440088  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:20.027644  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:20.041735  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:20.041826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:20.077436  188656 cri.go:89] found id: ""
	I0731 21:03:20.077470  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.077483  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:20.077491  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:20.077558  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:20.117420  188656 cri.go:89] found id: ""
	I0731 21:03:20.117449  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.117459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:20.117466  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:20.117533  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:20.157794  188656 cri.go:89] found id: ""
	I0731 21:03:20.157827  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.157838  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:20.157847  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:20.157914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:20.193760  188656 cri.go:89] found id: ""
	I0731 21:03:20.193788  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.193796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:20.193803  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:20.193856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:20.231731  188656 cri.go:89] found id: ""
	I0731 21:03:20.231764  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.231777  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:20.231785  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:20.231856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:20.268666  188656 cri.go:89] found id: ""
	I0731 21:03:20.268697  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.268709  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:20.268717  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:20.268786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:20.304355  188656 cri.go:89] found id: ""
	I0731 21:03:20.304392  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.304406  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:20.304414  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:20.304478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:20.343886  188656 cri.go:89] found id: ""
	I0731 21:03:20.343915  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.343927  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:20.343940  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:20.343957  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:20.358460  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:20.358494  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:20.435473  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:20.435499  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:20.435522  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:20.517961  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:20.518002  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:20.561528  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:20.561567  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.119570  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:23.134276  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:23.134366  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:23.172808  188656 cri.go:89] found id: ""
	I0731 21:03:23.172837  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.172846  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:23.172852  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:23.172914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:23.208038  188656 cri.go:89] found id: ""
	I0731 21:03:23.208067  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.208080  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:23.208086  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:23.208140  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:23.244493  188656 cri.go:89] found id: ""
	I0731 21:03:23.244523  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.244533  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:23.244539  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:23.244605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:23.280474  188656 cri.go:89] found id: ""
	I0731 21:03:23.280503  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.280510  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:23.280517  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:23.280581  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:23.317381  188656 cri.go:89] found id: ""
	I0731 21:03:23.317415  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.317428  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:23.317441  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:23.317511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:23.357023  188656 cri.go:89] found id: ""
	I0731 21:03:23.357051  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.357062  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:23.357071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:23.357134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:23.400176  188656 cri.go:89] found id: ""
	I0731 21:03:23.400211  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.400223  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:23.400230  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:23.400298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:23.440157  188656 cri.go:89] found id: ""
	I0731 21:03:23.440190  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.440201  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:23.440213  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:23.440234  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.494762  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:23.494802  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:23.511463  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:23.511510  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:23.600359  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:23.600383  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:23.600403  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:23.682683  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:23.682723  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:26.225923  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:26.245708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.245791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.282882  188656 cri.go:89] found id: ""
	I0731 21:03:26.282910  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.282920  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:26.282928  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.282987  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.324227  188656 cri.go:89] found id: ""
	I0731 21:03:26.324268  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.324279  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:26.324287  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.324349  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.365996  188656 cri.go:89] found id: ""
	I0731 21:03:26.366027  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.366038  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:26.366047  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.366119  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.403790  188656 cri.go:89] found id: ""
	I0731 21:03:26.403823  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.403835  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:26.403844  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.403915  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.442924  188656 cri.go:89] found id: ""
	I0731 21:03:26.442947  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.442957  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:26.442964  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.443026  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.482260  188656 cri.go:89] found id: ""
	I0731 21:03:26.482286  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.482294  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:26.482300  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.482364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.526385  188656 cri.go:89] found id: ""
	I0731 21:03:26.526420  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.526432  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.526442  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:26.526511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:26.565217  188656 cri.go:89] found id: ""
	I0731 21:03:26.565250  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.565262  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:26.565275  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:26.565294  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:26.623437  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:26.623478  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:26.639642  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:26.639683  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:26.720274  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:26.720309  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.720325  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:26.799689  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:26.799728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
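	Each probe cycle above re-collects the same five diagnostics over SSH before retrying. To reproduce the collection by hand from a shell on the node, a minimal sketch using the exact commands the log shows (assuming the v1.20.0 kubectl binary is still at the path minikube staged it to):

	    # kubelet and CRI-O service logs
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # kernel warnings and errors
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # node view from the apiserver (fails here with "connection refused")
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    # container status straight from the runtime
	    sudo crictl ps -a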
	I0731 21:03:29.351214  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:29.365487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:29.365561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:29.402989  188656 cri.go:89] found id: ""
	I0731 21:03:29.403015  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.403022  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:29.403028  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:29.403079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:29.443276  188656 cri.go:89] found id: ""
	I0731 21:03:29.443310  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.443321  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:29.443329  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:29.443397  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:29.483285  188656 cri.go:89] found id: ""
	I0731 21:03:29.483311  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.483319  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:29.483326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:29.483384  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:29.522285  188656 cri.go:89] found id: ""
	I0731 21:03:29.522317  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.522329  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:29.522337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:29.522406  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:29.565115  188656 cri.go:89] found id: ""
	I0731 21:03:29.565145  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.565155  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:29.565163  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:29.565233  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:29.603768  188656 cri.go:89] found id: ""
	I0731 21:03:29.603805  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.603816  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:29.603822  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:29.603875  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:29.640380  188656 cri.go:89] found id: ""
	I0731 21:03:29.640406  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.640416  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:29.640424  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:29.640493  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:29.679699  188656 cri.go:89] found id: ""
	I0731 21:03:29.679727  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.679736  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:29.679749  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:29.679764  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:29.735555  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:29.735603  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:29.749670  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:29.749708  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:29.825950  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:29.825973  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:29.825989  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.915420  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:29.915463  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:32.462996  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:32.478659  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:32.478739  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:32.528625  188656 cri.go:89] found id: ""
	I0731 21:03:32.528651  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.528659  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:32.528665  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:32.528724  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:32.574371  188656 cri.go:89] found id: ""
	I0731 21:03:32.574399  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.574408  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:32.574414  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:32.574474  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:32.616916  188656 cri.go:89] found id: ""
	I0731 21:03:32.616960  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.616970  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:32.616975  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:32.617040  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:32.657725  188656 cri.go:89] found id: ""
	I0731 21:03:32.657758  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.657769  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:32.657777  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:32.657842  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:32.693197  188656 cri.go:89] found id: ""
	I0731 21:03:32.693226  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.693237  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:32.693245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:32.693316  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:32.733567  188656 cri.go:89] found id: ""
	I0731 21:03:32.733594  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.733602  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:32.733608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:32.733670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:32.774624  188656 cri.go:89] found id: ""
	I0731 21:03:32.774659  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.774671  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:32.774679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:32.774747  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:32.811755  188656 cri.go:89] found id: ""
	I0731 21:03:32.811790  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.811809  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:32.811822  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:32.811835  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:32.825512  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:32.825544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:32.902310  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:32.902339  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:32.902366  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:32.983347  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:32.983391  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:33.028037  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:33.028068  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:35.582896  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:35.597483  188656 kubeadm.go:597] duration metric: took 4m3.860422558s to restartPrimaryControlPlane
	W0731 21:03:35.597559  188656 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
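	The restart path only succeeds if a kube-apiserver process is already running in the guest; the poll it repeats (roughly every three seconds, for just over four minutes here) never finds one, so minikube falls back to a full kubeadm reset and re-init. The probe itself is a single check, sketched here from the command the log keeps running:

	    # exit status 0 means an apiserver process matching the minikube profile exists
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver running" || echo "no apiserver process"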
	I0731 21:03:35.597598  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:36.054326  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:36.070199  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:36.081882  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:36.093300  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:36.093322  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:36.093396  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:36.103781  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:36.103843  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:36.114702  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:36.125213  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:36.125299  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:36.136299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.146441  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:36.146520  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.157524  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:36.168247  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:36.168327  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
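	The cleanup pass above applies one rule per kubeconfig under /etc/kubernetes: keep the file only if it already references https://control-plane.minikube.internal:8443, otherwise delete it so the following kubeadm init can regenerate it. A compact sketch of the same check-and-remove loop (same file list and endpoint as in the log; in this run the files are already missing, so the removals are no-ops):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      path="/etc/kubernetes/$f"
	      # a missing file or a file without the expected endpoint is treated the same way: remove it
	      sudo grep -q "$endpoint" "$path" 2>/dev/null || sudo rm -f "$path"
	    done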
	I0731 21:03:36.178875  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:36.253662  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:03:36.253804  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:36.401385  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:36.401550  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:36.401686  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:03:36.591601  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:36.593492  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:36.593604  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:36.593690  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:36.593817  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:36.593907  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:36.594011  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:36.594090  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:36.594215  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:36.594602  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:36.595122  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:36.595323  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:36.595414  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:36.595548  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:37.052958  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:37.178980  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:37.375085  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:37.550735  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:37.571991  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:37.575050  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:37.575227  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:37.707194  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:37.709295  188656 out.go:204]   - Booting up control plane ...
	I0731 21:03:37.709427  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:37.722549  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:37.723455  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:37.724194  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:37.726323  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:04:17.729291  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:04:17.730290  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:17.730512  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:22.731353  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:22.731627  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:32.732572  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:32.732835  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:52.734257  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:52.734530  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739465  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:05:32.739778  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739796  188656 kubeadm.go:310] 
	I0731 21:05:32.739854  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:05:32.739962  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:05:32.739988  188656 kubeadm.go:310] 
	I0731 21:05:32.740034  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:05:32.740083  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:05:32.740230  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:05:32.740245  188656 kubeadm.go:310] 
	I0731 21:05:32.740393  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:05:32.740441  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:05:32.740485  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:05:32.740494  188656 kubeadm.go:310] 
	I0731 21:05:32.740624  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:05:32.740741  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:05:32.740752  188656 kubeadm.go:310] 
	I0731 21:05:32.740888  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:05:32.741008  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:05:32.741084  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:05:32.741145  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:05:32.741152  188656 kubeadm.go:310] 
	I0731 21:05:32.741834  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:05:32.741967  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:05:32.742066  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:05:32.742264  188656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
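	Both init attempts fail the same way: the kubelet never answers its health endpoint on 127.0.0.1:10248, so the wait-control-plane phase times out after four minutes without any static pod coming up. The checks kubeadm recommends in its own output can be run directly on the node; a short sketch, using the CRI-O socket path shown above:

	    # is the kubelet service running, and what is it logging?
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    # does the kubelet health endpoint answer? (it refuses the connection in this run)
	    curl -sSL http://localhost:10248/healthz
	    # did any control-plane container start, and possibly crash, under CRI-O?
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause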
	
	I0731 21:05:32.742340  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:05:33.227380  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:05:33.243864  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:05:33.254208  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:05:33.254234  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:05:33.254313  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:05:33.264766  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:05:33.264846  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:05:33.275517  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:05:33.286281  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:05:33.286358  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:05:33.297108  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.307555  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:05:33.307627  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.318193  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:05:33.328155  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:05:33.328220  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:05:33.338088  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:05:33.569897  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:07:29.725230  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:07:29.725381  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:07:29.726868  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:07:29.726959  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:07:29.727064  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:07:29.727204  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:07:29.727322  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:07:29.727389  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:07:29.729525  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:07:29.729659  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:07:29.729761  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:07:29.729918  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:07:29.730026  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:07:29.730126  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:07:29.730268  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:07:29.730369  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:07:29.730461  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:07:29.730555  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:07:29.730658  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:07:29.730713  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:07:29.730790  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:07:29.730856  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:07:29.730931  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:07:29.731014  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:07:29.731111  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:07:29.731248  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:07:29.731339  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:07:29.731395  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:07:29.731486  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:07:29.733052  188656 out.go:204]   - Booting up control plane ...
	I0731 21:07:29.733146  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:07:29.733226  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:07:29.733305  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:07:29.733454  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:07:29.733656  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:07:29.733735  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:07:29.733830  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734048  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734116  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734275  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734331  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734543  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734642  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734868  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734966  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.735234  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.735252  188656 kubeadm.go:310] 
	I0731 21:07:29.735313  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:07:29.735376  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:07:29.735385  188656 kubeadm.go:310] 
	I0731 21:07:29.735432  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:07:29.735480  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:07:29.735624  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:07:29.735634  188656 kubeadm.go:310] 
	I0731 21:07:29.735779  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:07:29.735830  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:07:29.735879  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:07:29.735889  188656 kubeadm.go:310] 
	I0731 21:07:29.736038  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:07:29.736129  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:07:29.736141  188656 kubeadm.go:310] 
	I0731 21:07:29.736241  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:07:29.736315  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:07:29.736400  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:07:29.736480  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:07:29.736537  188656 kubeadm.go:310] 
	I0731 21:07:29.736579  188656 kubeadm.go:394] duration metric: took 7m58.053099483s to StartCluster
	I0731 21:07:29.736660  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:07:29.736793  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:07:29.802897  188656 cri.go:89] found id: ""
	I0731 21:07:29.802932  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.802945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:07:29.802953  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:07:29.803021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:07:29.840059  188656 cri.go:89] found id: ""
	I0731 21:07:29.840088  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.840098  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:07:29.840106  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:07:29.840178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:07:29.881030  188656 cri.go:89] found id: ""
	I0731 21:07:29.881058  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.881066  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:07:29.881073  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:07:29.881150  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:07:29.923495  188656 cri.go:89] found id: ""
	I0731 21:07:29.923524  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.923532  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:07:29.923538  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:07:29.923604  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:07:29.966128  188656 cri.go:89] found id: ""
	I0731 21:07:29.966156  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.966164  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:07:29.966171  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:07:29.966236  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:07:30.007648  188656 cri.go:89] found id: ""
	I0731 21:07:30.007678  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.007687  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:07:30.007693  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:07:30.007748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:07:30.047857  188656 cri.go:89] found id: ""
	I0731 21:07:30.047887  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.047903  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:07:30.047909  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:07:30.047959  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:07:30.087245  188656 cri.go:89] found id: ""
	I0731 21:07:30.087275  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.087283  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:07:30.087294  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:07:30.087308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:07:30.168205  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:07:30.168235  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:07:30.168256  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:07:30.276908  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:07:30.276951  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:07:30.322993  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:07:30.323030  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:07:30.375237  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:07:30.375287  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 21:07:30.392523  188656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:07:30.392579  188656 out.go:239] * 
	W0731 21:07:30.392653  188656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.392683  188656 out.go:239] * 
	W0731 21:07:30.393845  188656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:07:30.397498  188656 out.go:177] 
	W0731 21:07:30.398890  188656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.398959  188656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:07:30.398995  188656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:07:30.401295  188656 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-239115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
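The kubeadm output and the minikube Suggestion line captured above both point at the kubelet on the node rather than at the test harness. A minimal manual follow-up is sketched below: it reuses the profile name and flags from the failing invocation, the inspection commands are the ones the kubeadm output itself recommends, and the --extra-config=kubelet.cgroup-driver=systemd flag is the log's own suggestion rather than a verified fix for this run.

	# Inspect the kubelet on the node, per the kubeadm hints in the captured stderr.
	out/minikube-linux-amd64 -p old-k8s-version-239115 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-239115 ssh "sudo journalctl -u kubelet --no-pager -n 100"
	# List any control-plane containers that cri-o managed to start.
	out/minikube-linux-amd64 -p old-k8s-version-239115 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the same start with the cgroup-driver hint from the Suggestion line.
	out/minikube-linux-amd64 start -p old-k8s-version-239115 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd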
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115
E0731 21:07:30.947776  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 2 (236.379156ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
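The Go template in the status command above prints only the host field, which is why stdout shows "Running" even though the control plane never came up. A broader manual check of the same profile might query the other components as well; Host/Kubelet/APIServer are minikube's usual status field names, assumed here rather than taken from this report.

	out/minikube-linux-amd64 status -p old-k8s-version-239115 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'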
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-239115 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-239115 logs -n 25: (1.710168361s)
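Before the captured `logs -n 25` dump below, note that the boxed advice in the failure message asks for a full log bundle when reporting the problem upstream; a sketch of collecting one for this profile (the output file name is arbitrary):

	out/minikube-linux-amd64 -p old-k8s-version-239115 logs --file=old-k8s-version-239115.logs.txt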
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC |                     |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo find                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo crio                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-341849                                       | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-248084 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-248084                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:51 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831240            | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-916885             | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:55:13
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:55:13.835355  188656 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:55:13.835514  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835525  188656 out.go:304] Setting ErrFile to fd 2...
	I0731 20:55:13.835531  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835717  188656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:55:13.836233  188656 out.go:298] Setting JSON to false
	I0731 20:55:13.837146  188656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9450,"bootTime":1722449864,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:55:13.837206  188656 start.go:139] virtualization: kvm guest
	I0731 20:55:13.839094  188656 out.go:177] * [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:55:13.840630  188656 notify.go:220] Checking for updates...
	I0731 20:55:13.840638  188656 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:55:13.841884  188656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:55:13.843054  188656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:55:13.844295  188656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:55:13.845348  188656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:55:13.846480  188656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:55:13.847974  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:55:13.848349  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.848390  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.863017  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0731 20:55:13.863418  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.863927  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.863980  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.864357  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.864625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.866178  188656 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 20:55:13.867248  188656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:55:13.867523  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.867552  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.881922  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0731 20:55:13.882304  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.882707  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.882729  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.883037  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.883214  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.917067  188656 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:55:13.918247  188656 start.go:297] selected driver: kvm2
	I0731 20:55:13.918260  188656 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.918396  188656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:55:13.919323  188656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.919428  188656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:55:13.934150  188656 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:55:13.934506  188656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:55:13.934569  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:55:13.934583  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:55:13.934630  188656 start.go:340] cluster config:
	{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.934737  188656 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.936401  188656 out.go:177] * Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	I0731 20:55:13.769565  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:13.937700  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:55:13.937735  188656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:55:13.937743  188656 cache.go:56] Caching tarball of preloaded images
	I0731 20:55:13.937806  188656 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:55:13.937816  188656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:55:13.937907  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:55:13.938068  188656 start.go:360] acquireMachinesLock for old-k8s-version-239115: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:55:19.845616  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:22.917614  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:28.997601  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:32.069596  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:38.149607  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:41.221579  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:47.301587  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:50.373695  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:56.453611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:59.525649  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:05.605640  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:08.677654  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:14.757599  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:17.829627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:23.909581  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:26.981613  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:33.061611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:36.133597  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:42.213638  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:45.285703  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:51.365653  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:54.437615  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:00.517627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:03.589595  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:09.669666  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:12.741661  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:18.821643  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:21.893594  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:27.973636  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:31.045651  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:37.125619  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:40.197656  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:46.277679  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:49.349535  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:55.429634  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:58.501611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:04.581620  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:07.653642  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:13.733571  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:16.805674  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:19.809697  188133 start.go:364] duration metric: took 4m15.439364065s to acquireMachinesLock for "no-preload-916885"
	I0731 20:58:19.809748  188133 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:19.809756  188133 fix.go:54] fixHost starting: 
	I0731 20:58:19.810113  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:19.810149  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:19.825131  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0731 20:58:19.825615  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:19.826110  188133 main.go:141] libmachine: Using API Version  1
	I0731 20:58:19.826132  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:19.826439  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:19.826616  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:19.826840  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 20:58:19.828267  188133 fix.go:112] recreateIfNeeded on no-preload-916885: state=Stopped err=<nil>
	I0731 20:58:19.828294  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	W0731 20:58:19.828471  188133 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:19.829957  188133 out.go:177] * Restarting existing kvm2 VM for "no-preload-916885" ...
	I0731 20:58:19.807506  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:19.807579  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.807919  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:58:19.807946  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.808126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:58:19.809580  187862 machine.go:97] duration metric: took 4m37.431426503s to provisionDockerMachine
	I0731 20:58:19.809625  187862 fix.go:56] duration metric: took 4m37.4520345s for fixHost
	I0731 20:58:19.809631  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 4m37.452053341s
	W0731 20:58:19.809664  187862 start.go:714] error starting host: provision: host is not running
	W0731 20:58:19.809893  187862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 20:58:19.809916  187862 start.go:729] Will try again in 5 seconds ...
	I0731 20:58:19.831221  188133 main.go:141] libmachine: (no-preload-916885) Calling .Start
	I0731 20:58:19.831409  188133 main.go:141] libmachine: (no-preload-916885) Ensuring networks are active...
	I0731 20:58:19.832210  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network default is active
	I0731 20:58:19.832536  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network mk-no-preload-916885 is active
	I0731 20:58:19.832885  188133 main.go:141] libmachine: (no-preload-916885) Getting domain xml...
	I0731 20:58:19.833563  188133 main.go:141] libmachine: (no-preload-916885) Creating domain...
	I0731 20:58:21.031310  188133 main.go:141] libmachine: (no-preload-916885) Waiting to get IP...
	I0731 20:58:21.032067  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.032519  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.032626  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.032509  189287 retry.go:31] will retry after 207.547113ms: waiting for machine to come up
	I0731 20:58:21.242229  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.242716  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.242797  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.242683  189287 retry.go:31] will retry after 307.483232ms: waiting for machine to come up
	I0731 20:58:21.552437  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.552954  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.552977  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.552911  189287 retry.go:31] will retry after 441.063904ms: waiting for machine to come up
	I0731 20:58:21.995514  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.995860  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.995903  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.995813  189287 retry.go:31] will retry after 596.915537ms: waiting for machine to come up
	I0731 20:58:22.594563  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:22.595037  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:22.595079  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:22.594988  189287 retry.go:31] will retry after 471.207023ms: waiting for machine to come up
	I0731 20:58:23.067499  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.067926  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.067950  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.067899  189287 retry.go:31] will retry after 756.851428ms: waiting for machine to come up
	I0731 20:58:23.826869  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.827277  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.827305  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.827232  189287 retry.go:31] will retry after 981.303239ms: waiting for machine to come up
	I0731 20:58:24.810830  187862 start.go:360] acquireMachinesLock for embed-certs-831240: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:58:24.810239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:24.810615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:24.810651  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:24.810584  189287 retry.go:31] will retry after 1.18169902s: waiting for machine to come up
	I0731 20:58:25.994320  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:25.994700  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:25.994728  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:25.994635  189287 retry.go:31] will retry after 1.781207961s: waiting for machine to come up
	I0731 20:58:27.778381  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:27.778764  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:27.778805  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:27.778734  189287 retry.go:31] will retry after 1.885603462s: waiting for machine to come up
	I0731 20:58:29.665633  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:29.666049  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:29.666070  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:29.666026  189287 retry.go:31] will retry after 2.664379174s: waiting for machine to come up
	I0731 20:58:32.333226  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:32.333615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:32.333644  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:32.333594  189287 retry.go:31] will retry after 2.932420774s: waiting for machine to come up
	I0731 20:58:35.267165  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:35.267527  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:35.267558  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:35.267496  189287 retry.go:31] will retry after 4.378841892s: waiting for machine to come up
	I0731 20:58:41.010483  188266 start.go:364] duration metric: took 4m25.11688001s to acquireMachinesLock for "default-k8s-diff-port-125614"
	I0731 20:58:41.010557  188266 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:41.010566  188266 fix.go:54] fixHost starting: 
	I0731 20:58:41.010992  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:41.011033  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:41.030450  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0731 20:58:41.030910  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:41.031360  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:58:41.031382  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:41.031703  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:41.031859  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:41.032020  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:58:41.033653  188266 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125614: state=Stopped err=<nil>
	I0731 20:58:41.033695  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	W0731 20:58:41.033872  188266 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:41.035898  188266 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-125614" ...
	I0731 20:58:39.650969  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651458  188133 main.go:141] libmachine: (no-preload-916885) Found IP for machine: 192.168.72.239
	I0731 20:58:39.651475  188133 main.go:141] libmachine: (no-preload-916885) Reserving static IP address...
	I0731 20:58:39.651516  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has current primary IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651957  188133 main.go:141] libmachine: (no-preload-916885) Reserved static IP address: 192.168.72.239
	I0731 20:58:39.651995  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.652023  188133 main.go:141] libmachine: (no-preload-916885) Waiting for SSH to be available...
	I0731 20:58:39.652054  188133 main.go:141] libmachine: (no-preload-916885) DBG | skip adding static IP to network mk-no-preload-916885 - found existing host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"}
	I0731 20:58:39.652073  188133 main.go:141] libmachine: (no-preload-916885) DBG | Getting to WaitForSSH function...
	I0731 20:58:39.654095  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654450  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.654479  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654636  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH client type: external
	I0731 20:58:39.654659  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa (-rw-------)
	I0731 20:58:39.654714  188133 main.go:141] libmachine: (no-preload-916885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:39.654729  188133 main.go:141] libmachine: (no-preload-916885) DBG | About to run SSH command:
	I0731 20:58:39.654768  188133 main.go:141] libmachine: (no-preload-916885) DBG | exit 0
	I0731 20:58:39.781409  188133 main.go:141] libmachine: (no-preload-916885) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:39.781741  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetConfigRaw
	I0731 20:58:39.782349  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:39.784813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785234  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.785266  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785643  188133 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/config.json ...
	I0731 20:58:39.785859  188133 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:39.785879  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:39.786095  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.788573  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.788840  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.788868  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.789025  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.789203  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789495  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.789661  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.789927  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.789941  188133 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:39.901661  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:39.901687  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.901920  188133 buildroot.go:166] provisioning hostname "no-preload-916885"
	I0731 20:58:39.901953  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.902142  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.904763  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905159  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.905186  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905347  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.905534  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905698  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905822  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.905977  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.906137  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.906155  188133 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-916885 && echo "no-preload-916885" | sudo tee /etc/hostname
	I0731 20:58:40.030955  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-916885
	
	I0731 20:58:40.030979  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.033905  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034254  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.034276  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034487  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.034693  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.034868  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.035014  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.035197  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.035373  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.035392  188133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-916885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-916885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-916885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:40.154331  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:40.154381  188133 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:40.154436  188133 buildroot.go:174] setting up certificates
	I0731 20:58:40.154452  188133 provision.go:84] configureAuth start
	I0731 20:58:40.154474  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:40.154813  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:40.157702  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158053  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.158075  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158218  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.160715  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161030  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.161048  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161186  188133 provision.go:143] copyHostCerts
	I0731 20:58:40.161258  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:40.161267  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:40.161372  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:40.161477  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:40.161487  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:40.161520  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:40.161590  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:40.161606  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:40.161639  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:40.161700  188133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.no-preload-916885 san=[127.0.0.1 192.168.72.239 localhost minikube no-preload-916885]
	I0731 20:58:40.341529  188133 provision.go:177] copyRemoteCerts
	I0731 20:58:40.341586  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:40.341612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.344557  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.344851  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.344871  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.345080  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.345266  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.345432  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.345677  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.431395  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:40.455012  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 20:58:40.477721  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:40.500174  188133 provision.go:87] duration metric: took 345.705192ms to configureAuth
	I0731 20:58:40.500203  188133 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:40.500377  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 20:58:40.500462  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.503077  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503438  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.503467  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503586  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.503780  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.503947  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.504065  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.504245  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.504467  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.504489  188133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:58:40.765409  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:58:40.765448  188133 machine.go:97] duration metric: took 979.574417ms to provisionDockerMachine
	I0731 20:58:40.765460  188133 start.go:293] postStartSetup for "no-preload-916885" (driver="kvm2")
	I0731 20:58:40.765474  188133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:58:40.765525  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:40.765895  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:58:40.765928  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.768314  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768610  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.768657  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768760  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.768926  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.769089  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.769199  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.855821  188133 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:58:40.860032  188133 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:58:40.860071  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:58:40.860148  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:58:40.860251  188133 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:58:40.860367  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:58:40.869291  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:40.892945  188133 start.go:296] duration metric: took 127.469545ms for postStartSetup
	I0731 20:58:40.892991  188133 fix.go:56] duration metric: took 21.083232755s for fixHost
	I0731 20:58:40.893019  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.895784  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896166  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.896197  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896316  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.896501  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896654  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896777  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.896964  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.897133  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.897143  188133 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:58:41.010330  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459520.969906971
	
	I0731 20:58:41.010352  188133 fix.go:216] guest clock: 1722459520.969906971
	I0731 20:58:41.010360  188133 fix.go:229] Guest: 2024-07-31 20:58:40.969906971 +0000 UTC Remote: 2024-07-31 20:58:40.892995844 +0000 UTC m=+276.656012666 (delta=76.911127ms)
	I0731 20:58:41.010390  188133 fix.go:200] guest clock delta is within tolerance: 76.911127ms
	I0731 20:58:41.010396  188133 start.go:83] releasing machines lock for "no-preload-916885", held for 21.200662427s
	I0731 20:58:41.010429  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.010733  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:41.013519  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.013841  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.013867  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.014034  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014637  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014829  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014914  188133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:58:41.014974  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.015051  188133 ssh_runner.go:195] Run: cat /version.json
	I0731 20:58:41.015074  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.017813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.017837  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018170  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018205  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018225  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018493  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018678  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018694  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018862  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018885  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018965  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.019040  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.107999  188133 ssh_runner.go:195] Run: systemctl --version
	I0731 20:58:41.133039  188133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:58:41.279485  188133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:58:41.285765  188133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:58:41.285838  188133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:58:41.302175  188133 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:58:41.302203  188133 start.go:495] detecting cgroup driver to use...
	I0731 20:58:41.302280  188133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:58:41.319896  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:58:41.334618  188133 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:58:41.334689  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:58:41.348292  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:58:41.363968  188133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:58:41.472992  188133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:58:41.605581  188133 docker.go:233] disabling docker service ...
	I0731 20:58:41.605669  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:58:41.620414  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:58:41.632951  188133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:58:41.783942  188133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:58:41.912311  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:58:41.931076  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:58:41.954672  188133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 20:58:41.954752  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.967478  188133 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:58:41.967567  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.978990  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.991689  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.003168  188133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:58:42.019114  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.034607  188133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.057543  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.070420  188133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:58:42.081173  188133 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:58:42.081245  188133 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:58:42.095455  188133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:58:42.106943  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:42.221724  188133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:58:42.375966  188133 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:58:42.376051  188133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:58:42.381473  188133 start.go:563] Will wait 60s for crictl version
	I0731 20:58:42.381548  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.385364  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:58:42.426783  188133 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:58:42.426872  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.459096  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.490043  188133 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 20:58:42.491578  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:42.494915  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495289  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:42.495310  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495610  188133 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 20:58:42.500266  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:42.515164  188133 kubeadm.go:883] updating cluster {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:58:42.515295  188133 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 20:58:42.515332  188133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:58:42.551930  188133 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 20:58:42.551961  188133 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:58:42.552025  188133 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.552047  188133 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 20:58:42.552067  188133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.552087  188133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.552071  188133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.552028  188133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.552129  188133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.552035  188133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554026  188133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.554044  188133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.554103  188133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554112  188133 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 20:58:42.554123  188133 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.554030  188133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.554032  188133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.554027  188133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.721659  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.743910  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.750941  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 20:58:42.772074  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.781921  188133 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 20:58:42.781964  188133 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.782014  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.793926  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.813112  188133 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 20:58:42.813154  188133 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.813202  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.916544  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.937647  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.948145  188133 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 20:58:42.948194  188133 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.948208  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.948237  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.948268  188133 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 20:58:42.948300  188133 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.948338  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.948341  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.006187  188133 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 20:58:43.006238  188133 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.006295  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045484  188133 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 20:58:43.045541  188133 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.045585  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045589  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:43.045643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 20:58:43.045710  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 20:58:43.045730  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.045741  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:43.045780  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.045823  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:43.122382  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122429  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 20:58:43.122449  188133 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122489  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122497  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122513  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 20:58:43.122517  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122588  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.122637  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.122731  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.522969  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:41.037393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Start
	I0731 20:58:41.037575  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring networks are active...
	I0731 20:58:41.038366  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network default is active
	I0731 20:58:41.038703  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network mk-default-k8s-diff-port-125614 is active
	I0731 20:58:41.039402  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Getting domain xml...
	I0731 20:58:41.040218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Creating domain...
	I0731 20:58:42.319123  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting to get IP...
	I0731 20:58:42.320314  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320801  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320908  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.320797  189429 retry.go:31] will retry after 274.801111ms: waiting for machine to come up
	I0731 20:58:42.597444  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597866  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597914  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.597842  189429 retry.go:31] will retry after 382.328248ms: waiting for machine to come up
	I0731 20:58:42.981533  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982018  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982051  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.981955  189429 retry.go:31] will retry after 426.247953ms: waiting for machine to come up
	I0731 20:58:43.409523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.409795  189429 retry.go:31] will retry after 483.501118ms: waiting for machine to come up
	I0731 20:58:43.894451  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894844  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894874  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.894779  189429 retry.go:31] will retry after 759.968593ms: waiting for machine to come up
	I0731 20:58:44.656097  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656551  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656580  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:44.656503  189429 retry.go:31] will retry after 766.563008ms: waiting for machine to come up
	I0731 20:58:45.424382  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424793  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424831  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:45.424744  189429 retry.go:31] will retry after 1.172047019s: waiting for machine to come up
	I0731 20:58:45.107333  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.984807614s)
	I0731 20:58:45.107368  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 20:58:45.107393  188133 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107452  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107471  188133 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0: (1.98485492s)
	I0731 20:58:45.107523  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.985012474s)
	I0731 20:58:45.107534  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107560  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107563  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.984910291s)
	I0731 20:58:45.107585  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107609  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.984862504s)
	I0731 20:58:45.107619  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107626  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107668  188133 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.584674739s)
	I0731 20:58:45.107701  188133 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 20:58:45.107729  188133 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:45.107761  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:48.706832  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.599347822s)
	I0731 20:58:48.706872  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 20:58:48.706886  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (3.599247467s)
	I0731 20:58:48.706923  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 20:58:48.706898  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.706925  188133 ssh_runner.go:235] Completed: which crictl: (3.599146318s)
	I0731 20:58:48.706979  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:48.706980  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.747292  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 20:58:48.747415  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:46.598636  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599086  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599117  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:46.599033  189429 retry.go:31] will retry after 1.204122239s: waiting for machine to come up
	I0731 20:58:47.805441  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805922  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:47.805864  189429 retry.go:31] will retry after 1.466632244s: waiting for machine to come up
	I0731 20:58:49.274527  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275030  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:49.274961  189429 retry.go:31] will retry after 2.04848438s: waiting for machine to come up
	I0731 20:58:50.902082  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.154633427s)
	I0731 20:58:50.902138  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 20:58:50.902203  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.195118092s)
	I0731 20:58:50.902226  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 20:58:50.902259  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:50.902320  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:52.863335  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.960989386s)
	I0731 20:58:52.863370  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 20:58:52.863394  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:52.863434  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:51.324633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325056  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325080  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:51.324983  189429 retry.go:31] will retry after 1.991151757s: waiting for machine to come up
	I0731 20:58:53.318784  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319133  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319164  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:53.319077  189429 retry.go:31] will retry after 2.631932264s: waiting for machine to come up
	I0731 20:58:54.629811  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.766355185s)
	I0731 20:58:54.629840  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 20:58:54.629882  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:54.629954  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:55.983610  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.353622135s)
	I0731 20:58:55.983655  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 20:58:55.983692  188133 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:55.983764  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:56.828512  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 20:58:56.828560  188133 cache_images.go:123] Successfully loaded all cached images
	I0731 20:58:56.828568  188133 cache_images.go:92] duration metric: took 14.276593942s to LoadCachedImages
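(The LoadCachedImages sequence that just finished follows one repeated pattern per image: "podman image inspect" to see whether it is already in the runtime, "crictl rmi" plus "podman load -i" from the cached tarball under /var/lib/minikube/images when it is not. A hedged sketch of that pattern; ensureImage is a hypothetical helper, not minikube's real API.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureImage loads a cached image tarball into the podman/CRI-O store if
    // the image is not already present (podman inspect exits non-zero then).
    func ensureImage(ref, tarball string) error {
        if err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", ref).Run(); err == nil {
            return nil // already present in the container runtime
        }
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        if err := ensureImage("registry.k8s.io/coredns/coredns:v1.11.1",
            "/var/lib/minikube/images/coredns_v1.11.1"); err != nil {
            fmt.Println(err)
        }
    }
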
	I0731 20:58:56.828583  188133 kubeadm.go:934] updating node { 192.168.72.239 8443 v1.31.0-beta.0 crio true true} ...
	I0731 20:58:56.828722  188133 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-916885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:58:56.828806  188133 ssh_runner.go:195] Run: crio config
	I0731 20:58:56.877187  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:58:56.877222  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:58:56.877245  188133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:58:56.877269  188133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.239 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-916885 NodeName:no-preload-916885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:58:56.877442  188133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-916885"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:58:56.877526  188133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 20:58:56.887721  188133 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:58:56.887796  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:58:56.896845  188133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 20:58:56.912886  188133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 20:58:56.928914  188133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 20:58:56.945604  188133 ssh_runner.go:195] Run: grep 192.168.72.239	control-plane.minikube.internal$ /etc/hosts
	I0731 20:58:56.949538  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
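(The /etc/hosts rewrites above -- for host.minikube.internal earlier and control-plane.minikube.internal here -- use the usual drop-then-append idiom so repeated runs stay idempotent. A rough Go equivalent as a sketch; pinHost is a hypothetical helper, and the log itself does this through bash, a temp file and sudo cp instead of writing the file directly.)

    package main

    import (
        "os"
        "strings"
    )

    // pinHost removes any existing line ending in "\t<name>" from /etc/hosts
    // and appends "ip\tname", mirroring the grep -v / echo pipeline above.
    func pinHost(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        _ = pinHost("192.168.72.239", "control-plane.minikube.internal")
    }
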
	I0731 20:58:56.961490  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:57.075114  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:58:57.091701  188133 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885 for IP: 192.168.72.239
	I0731 20:58:57.091724  188133 certs.go:194] generating shared ca certs ...
	I0731 20:58:57.091743  188133 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:58:57.091909  188133 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:58:57.091959  188133 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:58:57.091971  188133 certs.go:256] generating profile certs ...
	I0731 20:58:57.092062  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/client.key
	I0731 20:58:57.092141  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key.cc7e9c96
	I0731 20:58:57.092193  188133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key
	I0731 20:58:57.092330  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:58:57.092405  188133 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:58:57.092423  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:58:57.092458  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:58:57.092489  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:58:57.092520  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:58:57.092586  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:57.093296  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:58:57.139431  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:58:57.169132  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:58:57.196541  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:58:57.232826  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:58:57.260875  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:58:57.290195  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:58:57.316645  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:58:57.339741  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:58:57.362406  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:58:57.385009  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:58:57.407540  188133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:58:57.423697  188133 ssh_runner.go:195] Run: openssl version
	I0731 20:58:57.429741  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:58:57.440545  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.444984  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.445035  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.450651  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:58:57.460547  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:58:57.470575  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474939  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474988  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.480481  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:58:57.490404  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:58:57.500433  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504785  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504835  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.510165  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:58:57.520019  188133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:58:57.524596  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:58:57.530667  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:58:57.536315  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:58:57.542049  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:58:57.547594  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:58:57.553084  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
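(The "openssl x509 -noout -in <cert> -checkend 86400" calls above succeed only when the certificate is still valid 86400 seconds, i.e. 24 hours, from now; that is how the restart path decides whether the existing control-plane certificates can be reused. The same check in plain Go, as a sketch; validFor and the sample path in main are illustrative only.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid d
    // from now -- the equivalent of `openssl x509 -noout -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
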
	I0731 20:58:57.558419  188133 kubeadm.go:392] StartCluster: {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:58:57.558501  188133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:58:57.558537  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.600004  188133 cri.go:89] found id: ""
	I0731 20:58:57.600087  188133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:58:57.609911  188133 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:58:57.609933  188133 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:58:57.609975  188133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:58:57.619498  188133 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:58:57.621885  188133 kubeconfig.go:125] found "no-preload-916885" server: "https://192.168.72.239:8443"
	I0731 20:58:57.624838  188133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:58:57.633984  188133 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.239
	I0731 20:58:57.634025  188133 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:58:57.634037  188133 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:58:57.634080  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.672988  188133 cri.go:89] found id: ""
	I0731 20:58:57.673053  188133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:58:57.689149  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:58:57.698520  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:58:57.698541  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 20:58:57.698595  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:58:57.707106  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:58:57.707163  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:58:57.715878  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:58:57.724169  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:58:57.724219  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:58:57.732890  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.741121  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:58:57.741174  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.749776  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:58:57.758063  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:58:57.758114  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:58:57.766815  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:58:57.775595  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:57.883689  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.740684  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.926231  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.987089  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:59.049782  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:58:59.049862  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
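("waiting for apiserver process to appear" above is a pgrep poll: the runner keeps issuing the same `sudo pgrep -xnf kube-apiserver.*minikube.*` until it returns a PID or the wait budget runs out. A minimal sketch of that loop; waitForAPIServer and the 90-second timeout in main are assumptions, not minikube's actual code or budget.)

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep (the same command as in the log above) until
    // a kube-apiserver process is found or the timeout expires.
    func waitForAPIServer(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if pid := bytes.TrimSpace(out); err == nil && len(pid) > 0 {
                return string(pid), nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        pid, err := waitForAPIServer(90 * time.Second)
        fmt.Println(pid, err)
    }
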
	I0731 20:59:00.418227  188656 start.go:364] duration metric: took 3m46.480116699s to acquireMachinesLock for "old-k8s-version-239115"
	I0731 20:59:00.418294  188656 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:00.418302  188656 fix.go:54] fixHost starting: 
	I0731 20:59:00.418738  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:00.418773  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:00.438533  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0731 20:59:00.438963  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:00.439499  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:59:00.439524  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:00.439930  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:00.441449  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:00.441651  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetState
	I0731 20:59:00.443465  188656 fix.go:112] recreateIfNeeded on old-k8s-version-239115: state=Stopped err=<nil>
	I0731 20:59:00.443505  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	W0731 20:59:00.443679  188656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:00.445840  188656 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239115" ...
	I0731 20:58:55.953940  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954422  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:55.954356  189429 retry.go:31] will retry after 3.068212527s: waiting for machine to come up
	I0731 20:58:59.025966  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026388  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has current primary IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026406  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Found IP for machine: 192.168.50.221
	I0731 20:58:59.026417  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserving static IP address...
	I0731 20:58:59.026867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserved static IP address: 192.168.50.221
	I0731 20:58:59.026918  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.026933  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for SSH to be available...
	I0731 20:58:59.026954  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | skip adding static IP to network mk-default-k8s-diff-port-125614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"}
	I0731 20:58:59.026972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Getting to WaitForSSH function...
	I0731 20:58:59.029330  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029685  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.029720  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029820  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH client type: external
	I0731 20:58:59.029846  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa (-rw-------)
	I0731 20:58:59.029877  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:59.029894  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | About to run SSH command:
	I0731 20:58:59.029906  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | exit 0
	I0731 20:58:59.161209  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:59.161713  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetConfigRaw
	I0731 20:58:59.162331  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.164645  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.164953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.164986  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.165269  188266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/config.json ...
	I0731 20:58:59.165479  188266 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:59.165503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:59.165692  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.167796  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.168110  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168247  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.168408  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168626  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168763  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.168901  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.169103  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.169115  188266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:59.281875  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:59.281901  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282185  188266 buildroot.go:166] provisioning hostname "default-k8s-diff-port-125614"
	I0731 20:58:59.282218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282460  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.284970  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285461  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.285498  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285612  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.285814  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286139  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.286278  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.286445  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.286460  188266 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125614 && echo "default-k8s-diff-port-125614" | sudo tee /etc/hostname
	I0731 20:58:59.411873  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125614
	
	I0731 20:58:59.411904  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.414733  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.415099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415274  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.415463  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415604  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415751  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.415898  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.416074  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.416090  188266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:59.539168  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:59.539210  188266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:59.539247  188266 buildroot.go:174] setting up certificates
	I0731 20:58:59.539256  188266 provision.go:84] configureAuth start
	I0731 20:58:59.539267  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.539595  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.542447  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.542887  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.542916  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.543103  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.545597  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.545972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.545992  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.546128  188266 provision.go:143] copyHostCerts
	I0731 20:58:59.546195  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:59.546206  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:59.546265  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:59.546366  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:59.546377  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:59.546407  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:59.546488  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:59.546492  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:59.546517  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:59.546565  188266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125614 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-125614 localhost minikube]
	I0731 20:58:59.690753  188266 provision.go:177] copyRemoteCerts
	I0731 20:58:59.690811  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:59.690839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.693800  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694141  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.694175  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694383  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.694583  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.694748  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.694884  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:58:59.783710  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:59.814512  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 20:58:59.843492  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:59.867793  188266 provision.go:87] duration metric: took 328.521723ms to configureAuth
	I0731 20:58:59.867840  188266 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:59.868013  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:58:59.868089  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.871214  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871655  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.871684  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871875  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.872127  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872309  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.872687  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.872909  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.872935  188266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:00.165458  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:00.165492  188266 machine.go:97] duration metric: took 999.996831ms to provisionDockerMachine
	I0731 20:59:00.165509  188266 start.go:293] postStartSetup for "default-k8s-diff-port-125614" (driver="kvm2")
	I0731 20:59:00.165527  188266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:00.165549  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.165936  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:00.165973  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.168477  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168837  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.168864  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168991  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.169203  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.169387  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.169575  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.262132  188266 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:00.266596  188266 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:00.266621  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:00.266695  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:00.266789  188266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:00.266909  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:00.276407  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:00.300017  188266 start.go:296] duration metric: took 134.490488ms for postStartSetup
	I0731 20:59:00.300061  188266 fix.go:56] duration metric: took 19.289494966s for fixHost
	I0731 20:59:00.300087  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.302714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303073  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.303106  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303249  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.303448  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303786  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.303978  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:00.304204  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:59:00.304217  188266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:00.418073  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459540.389901096
	
	I0731 20:59:00.418096  188266 fix.go:216] guest clock: 1722459540.389901096
	I0731 20:59:00.418105  188266 fix.go:229] Guest: 2024-07-31 20:59:00.389901096 +0000 UTC Remote: 2024-07-31 20:59:00.30006642 +0000 UTC m=+284.542031804 (delta=89.834676ms)
	I0731 20:59:00.418130  188266 fix.go:200] guest clock delta is within tolerance: 89.834676ms
	I0731 20:59:00.418138  188266 start.go:83] releasing machines lock for "default-k8s-diff-port-125614", held for 19.407605953s
	I0731 20:59:00.418167  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.418669  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:00.421683  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422050  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.422090  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422234  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422999  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.423061  188266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:00.423119  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.423354  188266 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:00.423378  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.426188  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426362  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426603  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426631  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426790  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.426882  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426929  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.427019  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427197  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427208  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.427363  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.427380  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427668  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.511834  188266 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:00.536649  188266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:00.692463  188266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:00.700344  188266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:00.700413  188266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:00.721837  188266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:00.721863  188266 start.go:495] detecting cgroup driver to use...
	I0731 20:59:00.721940  188266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:00.742477  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:00.760049  188266 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:00.760120  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:00.777823  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:00.791680  188266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:00.908094  188266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:01.051284  188266 docker.go:233] disabling docker service ...
	I0731 20:59:01.051379  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:01.070927  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:01.083393  188266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:01.223186  188266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:01.355265  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:01.369810  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:01.390523  188266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:01.390588  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.401241  188266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:01.401308  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.412049  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.422145  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.432523  188266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:01.442640  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.456933  188266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.475628  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.486226  188266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:01.496757  188266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:01.496813  188266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:01.510264  188266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:01.520231  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:01.636451  188266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:01.784134  188266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:01.784220  188266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:01.788836  188266 start.go:563] Will wait 60s for crictl version
	I0731 20:59:01.788895  188266 ssh_runner.go:195] Run: which crictl
	I0731 20:59:01.793059  188266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:01.840110  188266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:01.840200  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.868816  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.908539  188266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
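Taken together, the crictl.yaml write and the sed edits against /etc/crio/crio.conf.d/02-crio.conf logged above should leave the guest's CRI-O runtime configured roughly as follows. This is reconstructed from the commands in the log rather than captured from the VM, so treat it as a sketch:

	# /etc/sysconfig/crio.minikube (written earlier in this run)
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched above)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]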
	I0731 20:59:00.447208  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .Start
	I0731 20:59:00.447389  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring networks are active...
	I0731 20:59:00.448116  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network default is active
	I0731 20:59:00.448589  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network mk-old-k8s-version-239115 is active
	I0731 20:59:00.448892  188656 main.go:141] libmachine: (old-k8s-version-239115) Getting domain xml...
	I0731 20:59:00.450110  188656 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:59:01.823554  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting to get IP...
	I0731 20:59:01.824648  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:01.825111  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:01.825172  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:01.825080  189574 retry.go:31] will retry after 241.700507ms: waiting for machine to come up
	I0731 20:59:02.068913  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.069608  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.069738  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.069663  189574 retry.go:31] will retry after 258.921821ms: waiting for machine to come up
	I0731 20:59:02.330231  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.330846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.330877  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.330776  189574 retry.go:31] will retry after 460.911793ms: waiting for machine to come up
	I0731 20:59:02.793718  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.794177  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.794206  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.794156  189574 retry.go:31] will retry after 380.241989ms: waiting for machine to come up
	I0731 20:59:03.175918  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.176761  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.176786  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.176670  189574 retry.go:31] will retry after 631.876736ms: waiting for machine to come up
	I0731 20:59:03.810803  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.811478  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.811503  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.811366  189574 retry.go:31] will retry after 583.328017ms: waiting for machine to come up
	I0731 20:58:59.550347  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.050077  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.066942  188133 api_server.go:72] duration metric: took 1.017157745s to wait for apiserver process to appear ...
	I0731 20:59:00.066991  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:00.067016  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:00.067685  188133 api_server.go:269] stopped: https://192.168.72.239:8443/healthz: Get "https://192.168.72.239:8443/healthz": dial tcp 192.168.72.239:8443: connect: connection refused
	I0731 20:59:00.567237  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.555694  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.555739  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.555756  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.606602  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.606641  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.606657  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.617900  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.617935  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:04.067724  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.073838  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.073875  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:04.568116  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.575013  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.575044  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:05.067154  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:05.073314  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 20:59:05.083559  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 20:59:05.083595  188133 api_server.go:131] duration metric: took 5.016595337s to wait for apiserver health ...
	I0731 20:59:05.083606  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:59:05.083614  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:05.085564  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
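The api_server.go lines above probe https://192.168.72.239:8443/healthz at roughly half-second intervals, treating the anonymous 403s and the 500s (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks still pending) as not-ready until the endpoint finally returns 200. A minimal standalone sketch of that polling loop in Go follows; it is illustrative only, not minikube's actual implementation, and only the URL and the status handling come from the log above:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; the probe is unauthenticated,
		// so certificate verification is skipped and early 403s are expected.
		url := "https://192.168.72.239:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("apiserver not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // 200 "ok": the control plane reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}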
	I0731 20:59:01.910091  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:01.913322  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.913714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:01.913747  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.914046  188266 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:01.918504  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:01.930599  188266 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:01.930756  188266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:01.930826  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:01.968796  188266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:01.968882  188266 ssh_runner.go:195] Run: which lz4
	I0731 20:59:01.974123  188266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:01.979542  188266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:01.979575  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:03.529579  188266 crio.go:462] duration metric: took 1.555502358s to copy over tarball
	I0731 20:59:03.529662  188266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:04.395886  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:04.396400  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:04.396664  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:04.396347  189574 retry.go:31] will retry after 1.154504022s: waiting for machine to come up
	I0731 20:59:05.552240  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:05.552879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:05.552901  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:05.552831  189574 retry.go:31] will retry after 1.037365333s: waiting for machine to come up
	I0731 20:59:06.591875  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:06.592416  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:06.592450  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:06.592329  189574 retry.go:31] will retry after 1.249444079s: waiting for machine to come up
	I0731 20:59:07.843058  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:07.843436  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:07.843463  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:07.843370  189574 retry.go:31] will retry after 1.700521776s: waiting for machine to come up
	I0731 20:59:05.087080  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:05.105303  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:05.125019  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:05.136768  188133 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:05.136823  188133 system_pods.go:61] "coredns-5cfdc65f69-c9gcf" [3b9458d3-81d0-4138-8a6a-81f087c3ed02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:05.136836  188133 system_pods.go:61] "etcd-no-preload-916885" [aa31006d-8e74-48c2-9b5d-5604b3a1c283] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:05.136847  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [64549ba0-8e30-41d3-82eb-cdb729623a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:05.136856  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [2620c741-c27a-4df5-8555-58767d43c675] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:05.136866  188133 system_pods.go:61] "kube-proxy-99jgm" [0060c1a0-badc-401c-a4dc-5cfaa958654e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:05.136880  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [f02a0a1d-5cbb-4ee3-a084-21710667565e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:05.136894  188133 system_pods.go:61] "metrics-server-78fcd8795b-jrzgg" [acbe48be-32e9-44f8-9bf2-52e0e92a09e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:05.136912  188133 system_pods.go:61] "storage-provisioner" [d0f902cd-d1db-4c70-bdad-34bda917cec1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:05.136926  188133 system_pods.go:74] duration metric: took 11.882384ms to wait for pod list to return data ...
	I0731 20:59:05.136937  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:05.142117  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:05.142149  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:05.142165  188133 node_conditions.go:105] duration metric: took 5.221098ms to run NodePressure ...
	I0731 20:59:05.142187  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:05.534597  188133 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539583  188133 kubeadm.go:739] kubelet initialised
	I0731 20:59:05.539604  188133 kubeadm.go:740] duration metric: took 4.980297ms waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539626  188133 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:05.544498  188133 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:07.778624  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:06.024682  188266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.494984583s)
	I0731 20:59:06.024718  188266 crio.go:469] duration metric: took 2.495107603s to extract the tarball
	I0731 20:59:06.024729  188266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:06.062675  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:06.107619  188266 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:06.107649  188266 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:59:06.107667  188266 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0731 20:59:06.107792  188266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-125614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:06.107863  188266 ssh_runner.go:195] Run: crio config
	I0731 20:59:06.173983  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:06.174007  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:06.174019  188266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:06.174040  188266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125614 NodeName:default-k8s-diff-port-125614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:06.174168  188266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125614"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:06.174233  188266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:06.185059  188266 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:06.185189  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:06.196571  188266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 20:59:06.218964  188266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:06.239033  188266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 20:59:06.260519  188266 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:06.264718  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:06.278173  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:06.423941  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:06.441663  188266 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614 for IP: 192.168.50.221
	I0731 20:59:06.441689  188266 certs.go:194] generating shared ca certs ...
	I0731 20:59:06.441711  188266 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:06.441906  188266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:06.441965  188266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:06.441978  188266 certs.go:256] generating profile certs ...
	I0731 20:59:06.442080  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/client.key
	I0731 20:59:06.442157  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key.9cb12361
	I0731 20:59:06.442205  188266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key
	I0731 20:59:06.442354  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:06.442391  188266 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:06.442404  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:06.442447  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:06.442478  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:06.442522  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:06.442580  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:06.443470  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:06.497056  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:06.530978  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:06.574533  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:06.619523  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 20:59:06.648269  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:06.677824  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:06.704450  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:06.731606  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:06.756990  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:06.781214  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:06.804855  188266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:06.821531  188266 ssh_runner.go:195] Run: openssl version
	I0731 20:59:06.827394  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:06.838680  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843618  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843681  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.850238  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:06.865533  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:06.881516  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886809  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886876  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.893345  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:06.908919  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:06.922150  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927165  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927226  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.933724  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:06.946420  188266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:06.951347  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:06.959595  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:06.967808  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:06.977083  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:06.985089  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:06.992190  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
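
[Note] The six openssl invocations above are 24-hour expiry checks: "-checkend 86400" exits non-zero when a certificate would no longer be valid 86400 seconds from now. A minimal Go sketch of the same check follows; it is an illustration only, and the file path is simply one of the paths probed above.

// certcheck.go: sketch of what "openssl x509 -checkend 86400" verifies for the
// certificates listed above, namely whether the certificate is still valid
// 86400 seconds (24 hours) from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400 fails if the cert expires within the window.
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
	}
}

Here the checks produce no errors, which is consistent with the earlier "skipping valid signed profile cert regeneration" lines: the existing certificates are reused rather than regenerated.
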
	I0731 20:59:06.998458  188266 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:06.998548  188266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:06.998592  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.053176  188266 cri.go:89] found id: ""
	I0731 20:59:07.053256  188266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:07.064373  188266 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:07.064392  188266 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:07.064433  188266 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:07.075167  188266 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:07.076057  188266 kubeconfig.go:125] found "default-k8s-diff-port-125614" server: "https://192.168.50.221:8444"
	I0731 20:59:07.078091  188266 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:07.089136  188266 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0731 20:59:07.089161  188266 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:07.089174  188266 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:07.089225  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.133015  188266 cri.go:89] found id: ""
	I0731 20:59:07.133099  188266 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:07.155229  188266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:07.166326  188266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:07.166348  188266 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:07.166418  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 20:59:07.176709  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:07.176768  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:07.187232  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 20:59:07.197376  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:07.197453  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:07.209451  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.221141  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:07.221205  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.232016  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 20:59:07.242340  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:07.242402  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:07.253794  188266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:07.264912  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:07.382193  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.445321  188266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.063086935s)
	I0731 20:59:08.445364  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.664603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.744053  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.857284  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:08.857380  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.357505  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.857488  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.887329  188266 api_server.go:72] duration metric: took 1.030046485s to wait for apiserver process to appear ...
	I0731 20:59:09.887358  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:09.887405  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.887966  188266 api_server.go:269] stopped: https://192.168.50.221:8444/healthz: Get "https://192.168.50.221:8444/healthz": dial tcp 192.168.50.221:8444: connect: connection refused
	I0731 20:59:10.387674  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.545937  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:09.546581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:09.546605  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:09.546529  189574 retry.go:31] will retry after 1.934269586s: waiting for machine to come up
	I0731 20:59:11.482402  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:11.482794  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:11.482823  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:11.482744  189574 retry.go:31] will retry after 2.575131422s: waiting for machine to come up
	I0731 20:59:10.053236  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:10.551437  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:10.551467  188133 pod_ready.go:81] duration metric: took 5.006944467s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:10.551480  188133 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:12.559346  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:12.827297  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.827342  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.827390  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.883496  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.883538  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.887715  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.902715  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:12.902746  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.388340  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.392840  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.392872  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.888510  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.894519  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.894553  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:14.388177  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:14.392557  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 20:59:14.399285  188266 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:14.399321  188266 api_server.go:131] duration metric: took 4.511955505s to wait for apiserver health ...
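
[Note] The healthz polling above shows the usual restart progression: the endpoint first refuses connections, then returns 403 to the unauthenticated probe while authorization is still being wired up, then 500 while individual post-start hooks are still failing, and finally 200. Below is a minimal Go sketch of such a poll; it is not minikube's api_server.go code, the URL is taken from the log, and the 500ms interval and 20-attempt cap are assumptions.

// healthzpoll.go: sketch of polling the apiserver health endpoint until it
// reports 200, as the api_server.go lines above do.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// The test cluster's apiserver cert is signed by minikubeCA, so this
		// sketch skips verification instead of loading that CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := "https://192.168.50.221:8444/healthz?verbose"
	for attempt := 0; attempt < 20; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d\n%s\n", attempt, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

In the run above the first 200 arrives roughly 4.5s after the kubelet restart, matching the "took 4.511955505s to wait for apiserver health" metric.
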
	I0731 20:59:14.399333  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:14.399340  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:14.400987  188266 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:14.401981  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:14.420648  188266 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:14.441909  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:14.451365  188266 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:14.451406  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:14.451419  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:14.451426  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:14.451432  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:14.451438  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:14.451444  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:14.451461  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:14.451468  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:14.451476  188266 system_pods.go:74] duration metric: took 9.546534ms to wait for pod list to return data ...
	I0731 20:59:14.451486  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:14.454760  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:14.454784  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:14.454795  188266 node_conditions.go:105] duration metric: took 3.303087ms to run NodePressure ...
	I0731 20:59:14.454820  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:14.730635  188266 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735144  188266 kubeadm.go:739] kubelet initialised
	I0731 20:59:14.735165  188266 kubeadm.go:740] duration metric: took 4.500388ms waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735173  188266 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:14.742292  188266 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.749460  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749486  188266 pod_ready.go:81] duration metric: took 7.166399ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.749496  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749504  188266 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.757068  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757091  188266 pod_ready.go:81] duration metric: took 7.579526ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.757101  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757109  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.762181  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762203  188266 pod_ready.go:81] duration metric: took 5.083756ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.762213  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762219  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.845070  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845095  188266 pod_ready.go:81] duration metric: took 82.86894ms for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.845107  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845113  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.246100  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246131  188266 pod_ready.go:81] duration metric: took 401.011321ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.246150  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246159  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.645657  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645689  188266 pod_ready.go:81] duration metric: took 399.519543ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.645704  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645713  188266 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.045744  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045776  188266 pod_ready.go:81] duration metric: took 400.053102ms for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:16.045791  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045800  188266 pod_ready.go:38] duration metric: took 1.310615323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:16.045838  188266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:59:16.059046  188266 ops.go:34] apiserver oom_adj: -16
	I0731 20:59:16.059071  188266 kubeadm.go:597] duration metric: took 8.994671774s to restartPrimaryControlPlane
	I0731 20:59:16.059082  188266 kubeadm.go:394] duration metric: took 9.060633072s to StartCluster
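
[Note] The pod_ready waits above check each system-critical pod's PodReady condition and, while the node itself still reports Ready=False, skip the pod with the WaitExtra errors shown. A minimal client-go sketch of that kind of Ready check follows; it is an illustration rather than minikube's pod_ready.go, with the kubeconfig path and pod name taken from this log and error handling reduced to log.Fatal.

// podready.go: sketch of checking a pod's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-121704/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-gnrgs", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// A pod counts as "Ready" only when its PodReady condition is True.
	ready := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}

The same kind of loop, pointed at the metrics-server pod, is what keeps retrying for up to 4m0s later in this test.
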
	I0731 20:59:16.059104  188266 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.059181  188266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:16.060895  188266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.061143  188266 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:59:16.061226  188266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:59:16.061324  188266 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061386  188266 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061399  188266 addons.go:243] addon storage-provisioner should already be in state true
	I0731 20:59:16.061388  188266 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061400  188266 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061453  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:16.061495  188266 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061516  188266 addons.go:243] addon metrics-server should already be in state true
	I0731 20:59:16.061438  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061603  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061436  188266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125614"
	I0731 20:59:16.062072  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062084  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062085  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062110  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062127  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062188  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062822  188266 out.go:177] * Verifying Kubernetes components...
	I0731 20:59:16.064337  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:16.081194  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I0731 20:59:16.081208  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0731 20:59:16.081197  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0731 20:59:16.081872  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.081956  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082026  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082423  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082439  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.082926  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082951  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083047  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.083058  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.083076  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083712  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.083754  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.084871  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085484  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085734  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.085815  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.085845  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.089827  188266 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.089854  188266 addons.go:243] addon default-storageclass should already be in state true
	I0731 20:59:16.089884  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.090245  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.090301  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.106592  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0731 20:59:16.106609  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 20:59:16.108751  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.108849  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.109414  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109442  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109546  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109576  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109948  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.109953  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.110132  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.110163  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.111216  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0731 20:59:16.111657  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.112217  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.112239  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.112319  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.113374  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.115608  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.115649  188266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:16.115940  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.115979  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.116965  188266 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:16.117053  188266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.117069  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:59:16.117083  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.118247  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 20:59:16.118268  188266 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 20:59:16.118288  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.120985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121540  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.121563  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121764  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.121865  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.122295  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.122371  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.122490  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122552  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.122632  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.122850  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.123024  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.123218  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.133929  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0731 20:59:16.134348  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.134844  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.134865  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.135175  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.135389  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.136985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.137272  188266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.137287  188266 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:59:16.137313  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.140222  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140543  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.140560  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.140762  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140795  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.140969  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.141107  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.257677  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:16.275791  188266 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:16.373528  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 20:59:16.373552  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 20:59:16.380797  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.404028  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.406072  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 20:59:16.406098  188266 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 20:59:16.456003  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:16.456030  188266 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 20:59:16.517304  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:17.377438  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377468  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377514  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377565  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377765  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377780  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377797  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377827  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377835  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377930  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378354  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378417  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378424  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.378569  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378583  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.384110  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.384130  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.384325  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.384341  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428457  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428480  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428766  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.428782  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428804  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.429011  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.429024  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.429040  188266 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-125614"
	I0731 20:59:17.431884  188266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 20:59:14.059385  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:14.059857  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:14.059879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:14.059819  189574 retry.go:31] will retry after 3.127857327s: waiting for machine to come up
	I0731 20:59:17.189405  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:17.189871  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:17.189902  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:17.189821  189574 retry.go:31] will retry after 4.516767425s: waiting for machine to come up
	I0731 20:59:14.559493  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:16.561540  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:16.561568  188133 pod_ready.go:81] duration metric: took 6.010079286s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.561580  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068734  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.068756  188133 pod_ready.go:81] duration metric: took 1.507167128s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068766  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073069  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.073086  188133 pod_ready.go:81] duration metric: took 4.313817ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073095  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077480  188133 pod_ready.go:92] pod "kube-proxy-99jgm" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.077497  188133 pod_ready.go:81] duration metric: took 4.395483ms for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077506  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082197  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.082221  188133 pod_ready.go:81] duration metric: took 4.709042ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082234  188133 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:17.433072  188266 addons.go:510] duration metric: took 1.371850333s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 20:59:18.280135  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:20.280881  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.082812  187862 start.go:364] duration metric: took 58.27194035s to acquireMachinesLock for "embed-certs-831240"
	I0731 20:59:23.082866  187862 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:23.082875  187862 fix.go:54] fixHost starting: 
	I0731 20:59:23.083267  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:23.083308  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:23.101291  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0731 20:59:23.101826  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:23.102464  187862 main.go:141] libmachine: Using API Version  1
	I0731 20:59:23.102498  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:23.102817  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:23.103024  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:23.103187  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 20:59:23.105117  187862 fix.go:112] recreateIfNeeded on embed-certs-831240: state=Stopped err=<nil>
	I0731 20:59:23.105143  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	W0731 20:59:23.105307  187862 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:23.106919  187862 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831240" ...
	I0731 20:59:21.708296  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708811  188656 main.go:141] libmachine: (old-k8s-version-239115) Found IP for machine: 192.168.61.51
	I0731 20:59:21.708846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has current primary IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708860  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserving static IP address...
	I0731 20:59:21.709432  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.709663  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserved static IP address: 192.168.61.51
	I0731 20:59:21.709695  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | skip adding static IP to network mk-old-k8s-version-239115 - found existing host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"}
	I0731 20:59:21.709711  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting for SSH to be available...
	I0731 20:59:21.709723  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:59:21.711911  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712310  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.712345  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712517  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:59:21.712540  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:59:21.712581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:21.712598  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:59:21.712625  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:59:21.838026  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:21.838370  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:59:21.839169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:21.842168  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842588  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.842623  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842866  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:59:21.843126  188656 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:21.843150  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:21.843388  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.846148  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846657  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.846686  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.847165  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847360  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847530  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.847707  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.847938  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.847951  188656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:21.955109  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:21.955143  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955460  188656 buildroot.go:166] provisioning hostname "old-k8s-version-239115"
	I0731 20:59:21.955492  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955728  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.958752  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959146  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.959176  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959395  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.959620  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959781  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959918  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.960078  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.960358  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.960378  188656 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239115 && echo "old-k8s-version-239115" | sudo tee /etc/hostname
	I0731 20:59:22.090625  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239115
	
	I0731 20:59:22.090665  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.093927  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094356  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.094387  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094729  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.094942  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095153  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095364  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.095583  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.095819  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.095845  188656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:22.217153  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:22.217189  188656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:22.217215  188656 buildroot.go:174] setting up certificates
	I0731 20:59:22.217229  188656 provision.go:84] configureAuth start
	I0731 20:59:22.217242  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:22.217613  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:22.220640  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221082  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.221125  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221237  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.223811  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224152  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.224180  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224337  188656 provision.go:143] copyHostCerts
	I0731 20:59:22.224405  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:22.224418  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:22.224485  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:22.224604  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:22.224616  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:22.224654  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:22.224729  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:22.224740  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:22.224766  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:22.224833  188656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239115 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-239115]
	I0731 20:59:22.407532  188656 provision.go:177] copyRemoteCerts
	I0731 20:59:22.407599  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:22.407625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.410594  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411007  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.411033  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411338  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.411582  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.411811  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.412007  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.492781  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:22.518278  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 20:59:22.543018  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:22.568888  188656 provision.go:87] duration metric: took 351.643ms to configureAuth
	I0731 20:59:22.568920  188656 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:22.569099  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:59:22.569169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.572154  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572471  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.572500  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572669  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.572872  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.572993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.573112  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.573249  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.573481  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.573512  188656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:22.847156  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:22.847193  188656 machine.go:97] duration metric: took 1.004049055s to provisionDockerMachine
	I0731 20:59:22.847211  188656 start.go:293] postStartSetup for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:59:22.847229  188656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:22.847284  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:22.847710  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:22.847741  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.850515  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.850935  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.850962  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.851088  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.851288  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.851524  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.851674  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.932316  188656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:22.936672  188656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:22.936707  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:22.936792  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:22.936894  188656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:22.937011  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:22.946454  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:22.972952  188656 start.go:296] duration metric: took 125.72216ms for postStartSetup
	I0731 20:59:22.972996  188656 fix.go:56] duration metric: took 22.554695114s for fixHost
	I0731 20:59:22.973026  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.975758  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976166  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.976198  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976320  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.976585  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976782  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976966  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.977115  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.977275  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.977284  188656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:23.082657  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459563.026856067
	
	I0731 20:59:23.082683  188656 fix.go:216] guest clock: 1722459563.026856067
	I0731 20:59:23.082694  188656 fix.go:229] Guest: 2024-07-31 20:59:23.026856067 +0000 UTC Remote: 2024-07-31 20:59:22.973000729 +0000 UTC m=+249.171273714 (delta=53.855338ms)
	I0731 20:59:23.082721  188656 fix.go:200] guest clock delta is within tolerance: 53.855338ms
	I0731 20:59:23.082727  188656 start.go:83] releasing machines lock for "old-k8s-version-239115", held for 22.664459101s
	I0731 20:59:23.082752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.083052  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:23.086626  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087093  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.087135  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087366  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.087954  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088159  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088251  188656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:23.088303  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.088370  188656 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:23.088392  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.091710  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.091989  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092073  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092101  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092227  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092429  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.092472  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092520  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092618  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.092752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092803  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.092931  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.093100  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.093255  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.175012  188656 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:23.200192  188656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:23.348227  188656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:23.355109  188656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:23.355195  188656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:23.371683  188656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:23.371707  188656 start.go:495] detecting cgroup driver to use...
	I0731 20:59:23.371786  188656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:23.388727  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:23.408830  188656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:23.408907  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:23.423594  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:23.437876  188656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:23.559105  188656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:23.743186  188656 docker.go:233] disabling docker service ...
	I0731 20:59:23.743253  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:23.758053  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:23.779951  188656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:20.089173  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:22.092138  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.919494  188656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:24.057230  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:24.072687  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:24.094528  188656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:59:24.094600  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.106579  188656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:24.106634  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.120079  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.130759  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.142925  188656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:24.154760  188656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:24.165059  188656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:24.165113  188656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:24.179567  188656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:24.191838  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:24.339078  188656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:24.515723  188656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:24.515810  188656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:24.521882  188656 start.go:563] Will wait 60s for crictl version
	I0731 20:59:24.521966  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:24.527655  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:24.581055  188656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:24.581151  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.623207  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.662956  188656 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:59:22.780311  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.281324  188266 node_ready.go:49] node "default-k8s-diff-port-125614" has status "Ready":"True"
	I0731 20:59:23.281373  188266 node_ready.go:38] duration metric: took 7.005540469s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:23.281387  188266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:23.291207  188266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299173  188266 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.299202  188266 pod_ready.go:81] duration metric: took 7.971632ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299215  188266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307561  188266 pod_ready.go:92] pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.307580  188266 pod_ready.go:81] duration metric: took 8.357239ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307589  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314466  188266 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.314544  188266 pod_ready.go:81] duration metric: took 6.946044ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314565  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.323341  188266 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.108292  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Start
	I0731 20:59:23.108473  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring networks are active...
	I0731 20:59:23.109160  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network default is active
	I0731 20:59:23.109575  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network mk-embed-certs-831240 is active
	I0731 20:59:23.110032  187862 main.go:141] libmachine: (embed-certs-831240) Getting domain xml...
	I0731 20:59:23.110762  187862 main.go:141] libmachine: (embed-certs-831240) Creating domain...
	I0731 20:59:24.457926  187862 main.go:141] libmachine: (embed-certs-831240) Waiting to get IP...
	I0731 20:59:24.458936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.459381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.459477  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.459375  189758 retry.go:31] will retry after 266.695372ms: waiting for machine to come up
	I0731 20:59:24.727938  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.728394  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.728532  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.728451  189758 retry.go:31] will retry after 349.84093ms: waiting for machine to come up
	I0731 20:59:25.080044  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.080634  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.080668  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.080592  189758 retry.go:31] will retry after 324.555122ms: waiting for machine to come up
	I0731 20:59:25.407332  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.407852  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.407877  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.407795  189758 retry.go:31] will retry after 580.815897ms: waiting for machine to come up
	I0731 20:59:25.990957  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.991551  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.991578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.991468  189758 retry.go:31] will retry after 570.045476ms: waiting for machine to come up
	I0731 20:59:26.563493  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:26.563901  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:26.563931  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:26.563853  189758 retry.go:31] will retry after 582.597352ms: waiting for machine to come up
	I0731 20:59:27.148256  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:27.148744  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:27.148773  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:27.148688  189758 retry.go:31] will retry after 1.105713474s: waiting for machine to come up
	I0731 20:59:24.664851  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:24.668464  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.668842  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:24.668869  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.669103  188656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:24.674448  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:24.690857  188656 kubeadm.go:883] updating cluster {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:24.691011  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:59:24.691056  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:24.744259  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:24.744348  188656 ssh_runner.go:195] Run: which lz4
	I0731 20:59:24.749358  188656 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:59:24.754299  188656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:24.754341  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:59:26.551495  188656 crio.go:462] duration metric: took 1.802206904s to copy over tarball
	I0731 20:59:26.551571  188656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:24.589677  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:26.591079  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:29.089923  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:25.824008  188266 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.824037  188266 pod_ready.go:81] duration metric: took 2.509461823s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.824052  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840569  188266 pod_ready.go:92] pod "kube-proxy-csdc4" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.840595  188266 pod_ready.go:81] duration metric: took 16.533543ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840613  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103726  188266 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:26.103759  188266 pod_ready.go:81] duration metric: took 263.1364ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103774  188266 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:28.112583  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:30.610462  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:28.255818  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:28.256478  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:28.256506  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:28.256408  189758 retry.go:31] will retry after 1.3552249s: waiting for machine to come up
	I0731 20:59:29.613070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:29.613661  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:29.613693  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:29.613620  189758 retry.go:31] will retry after 1.522319436s: waiting for machine to come up
	I0731 20:59:31.138020  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:31.138490  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:31.138522  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:31.138434  189758 retry.go:31] will retry after 1.573723862s: waiting for machine to come up
	I0731 20:59:29.653941  188656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102337952s)
	I0731 20:59:29.653974  188656 crio.go:469] duration metric: took 3.102444338s to extract the tarball
	I0731 20:59:29.653982  188656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:29.704065  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:29.745966  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:29.746010  188656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:59:29.746076  188656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.746107  188656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.746129  188656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.746149  188656 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.746170  188656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:59:29.746410  188656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.746423  188656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.746735  188656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.747998  188656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.748005  188656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.748021  188656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.748091  188656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.915865  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.918049  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.950840  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.952762  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.956317  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.959905  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:59:30.000707  188656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:59:30.000768  188656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.000821  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.007207  188656 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:59:30.007251  188656 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.007294  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.016613  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.082306  188656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:59:30.082358  188656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.082364  188656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:59:30.082414  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.082418  188656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.082557  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.089299  188656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:59:30.089382  188656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.089427  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.105150  188656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:59:30.105217  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.105246  188656 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:59:30.105264  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.105282  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.129702  188656 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:59:30.129748  188656 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.129779  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.129826  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.129853  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.129800  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.188192  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:59:30.188243  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:59:30.188342  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:59:30.188365  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.268231  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:59:30.268296  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:59:30.268337  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:59:30.287822  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:59:30.287929  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:59:30.635440  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:30.776879  188656 cache_images.go:92] duration metric: took 1.030849977s to LoadCachedImages
	W0731 20:59:30.777006  188656 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0731 20:59:30.777028  188656 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0731 20:59:30.777175  188656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:30.777284  188656 ssh_runner.go:195] Run: crio config
	I0731 20:59:30.832542  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:59:30.832570  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:30.832586  188656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:30.832618  188656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239115 NodeName:old-k8s-version-239115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:59:30.832798  188656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239115"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:30.832877  188656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:59:30.842909  188656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:30.842995  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:30.852951  188656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 20:59:30.872643  188656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:30.889851  188656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 20:59:30.910958  188656 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:30.915645  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:30.928698  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:31.055628  188656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:31.076731  188656 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115 for IP: 192.168.61.51
	I0731 20:59:31.076759  188656 certs.go:194] generating shared ca certs ...
	I0731 20:59:31.076789  188656 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.076979  188656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:31.077041  188656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:31.077057  188656 certs.go:256] generating profile certs ...
	I0731 20:59:31.077175  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key
	I0731 20:59:31.077378  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83
	I0731 20:59:31.077514  188656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key
	I0731 20:59:31.077704  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:31.077789  188656 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:31.077806  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:31.077854  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:31.077892  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:31.077932  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:31.077997  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:31.078906  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:31.126980  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:31.167327  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:31.211947  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:31.258307  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 20:59:31.296628  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:31.342330  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:31.391114  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:31.415097  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:31.442595  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:31.472160  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:31.497814  188656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:31.515890  188656 ssh_runner.go:195] Run: openssl version
	I0731 20:59:31.523423  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:31.537984  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544161  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544225  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.552590  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:31.567190  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:31.581206  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586903  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586966  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.593485  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:31.606764  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:31.619748  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624599  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624681  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.631293  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:31.642823  188656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:31.647273  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:31.653142  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:31.659046  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:31.665552  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:31.671454  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:31.677426  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:59:31.683490  188656 kubeadm.go:392] StartCluster: {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:31.683586  188656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:31.683625  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.725466  188656 cri.go:89] found id: ""
	I0731 20:59:31.725548  188656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:31.737025  188656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:31.737050  188656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:31.737113  188656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:31.747325  188656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:31.748325  188656 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:31.748965  188656 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239115" cluster setting kubeconfig missing "old-k8s-version-239115" context setting]
	I0731 20:59:31.749997  188656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.757569  188656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:31.771188  188656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0731 20:59:31.771222  188656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:31.771236  188656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:31.771292  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.811574  188656 cri.go:89] found id: ""
	I0731 20:59:31.811653  188656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:31.829930  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:31.840145  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:31.840165  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:31.840206  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:31.851266  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:31.851340  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:31.861634  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:31.871532  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:31.871605  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:31.882164  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.892222  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:31.892291  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.903299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:31.916163  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:31.916235  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:31.929423  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:31.942668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.107220  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.953249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.207806  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.307640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.410338  188656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:33.410444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:31.221009  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:33.589275  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.612024  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:35.109601  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.713632  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:32.714137  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:32.714169  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:32.714064  189758 retry.go:31] will retry after 2.013485748s: waiting for machine to come up
	I0731 20:59:34.729625  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:34.730006  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:34.730070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:34.729970  189758 retry.go:31] will retry after 2.193072749s: waiting for machine to come up
	I0731 20:59:36.924345  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:36.924990  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:36.925008  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:36.924940  189758 retry.go:31] will retry after 3.394781674s: waiting for machine to come up
	I0731 20:59:33.910958  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.411011  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.911110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.410715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.911117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.410825  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.911311  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.410757  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.910786  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:38.410821  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.089622  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:38.589435  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:37.110446  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:39.111323  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:40.322463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:40.322827  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:40.322857  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:40.322774  189758 retry.go:31] will retry after 3.836613891s: waiting for machine to come up
	I0731 20:59:38.910891  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.411547  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.911260  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.411404  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.910719  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.411449  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.910643  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.410967  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.910703  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:43.411187  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.088768  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:43.589256  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:41.609891  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.111379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.160516  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161009  187862 main.go:141] libmachine: (embed-certs-831240) Found IP for machine: 192.168.39.92
	I0731 20:59:44.161029  187862 main.go:141] libmachine: (embed-certs-831240) Reserving static IP address...
	I0731 20:59:44.161041  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has current primary IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161561  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.161594  187862 main.go:141] libmachine: (embed-certs-831240) DBG | skip adding static IP to network mk-embed-certs-831240 - found existing host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"}
	I0731 20:59:44.161609  187862 main.go:141] libmachine: (embed-certs-831240) Reserved static IP address: 192.168.39.92
	I0731 20:59:44.161623  187862 main.go:141] libmachine: (embed-certs-831240) Waiting for SSH to be available...
	I0731 20:59:44.161638  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Getting to WaitForSSH function...
	I0731 20:59:44.163936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164285  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.164318  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164447  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH client type: external
	I0731 20:59:44.164479  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa (-rw-------)
	I0731 20:59:44.164499  187862 main.go:141] libmachine: (embed-certs-831240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:44.164510  187862 main.go:141] libmachine: (embed-certs-831240) DBG | About to run SSH command:
	I0731 20:59:44.164544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | exit 0
	I0731 20:59:44.293463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:44.293819  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetConfigRaw
	I0731 20:59:44.294490  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.296982  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297351  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.297381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297634  187862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/config.json ...
	I0731 20:59:44.297877  187862 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:44.297897  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:44.298116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.300452  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300806  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.300829  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300953  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.301146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301308  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301439  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.301634  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.301811  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.301823  187862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:44.418065  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:44.418105  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418428  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:59:44.418446  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418666  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.421984  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422403  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.422434  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422568  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.422733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.422893  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.423023  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.423208  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.423371  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.423410  187862 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831240 && echo "embed-certs-831240" | sudo tee /etc/hostname
	I0731 20:59:44.549670  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831240
	
	I0731 20:59:44.549697  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.552503  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.552851  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.552876  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.553017  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.553200  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553398  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553533  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.553721  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.554012  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.554039  187862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831240/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:44.674662  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
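(The script above is the hostname fix-up step: after setting the hostname to embed-certs-831240, it adds or rewrites the 127.0.1.1 entry so the guest can resolve its own name. A quick manual check on the guest, purely illustrative and not part of the test run:

    hostname                        # should print embed-certs-831240
    grep -n '127.0.1.1' /etc/hosts  # should show the embed-certs-831240 mapping
)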
	I0731 20:59:44.674693  187862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:44.674713  187862 buildroot.go:174] setting up certificates
	I0731 20:59:44.674723  187862 provision.go:84] configureAuth start
	I0731 20:59:44.674733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.675011  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.677631  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.677911  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.677951  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.678081  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.679869  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680177  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.680205  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680332  187862 provision.go:143] copyHostCerts
	I0731 20:59:44.680391  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:44.680401  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:44.680450  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:44.680537  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:44.680545  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:44.680564  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:44.680628  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:44.680635  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:44.680652  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:44.680711  187862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831240 san=[127.0.0.1 192.168.39.92 embed-certs-831240 localhost minikube]
	I0731 20:59:44.733872  187862 provision.go:177] copyRemoteCerts
	I0731 20:59:44.733927  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:44.733951  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.736399  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736731  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.736758  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736935  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.737131  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.737273  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.737430  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:44.824050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:44.847699  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 20:59:44.872138  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:44.896013  187862 provision.go:87] duration metric: took 221.275458ms to configureAuth
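(configureAuth above generates the machine's server certificate with the SANs listed a few lines earlier, 127.0.0.1, 192.168.39.92, embed-certs-831240, localhost and minikube, and copies it to /etc/docker/server.pem. An illustrative way to confirm the SANs on the guest, not part of the run:

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
)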
	I0731 20:59:44.896042  187862 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:44.896234  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:44.896327  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.898820  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899206  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.899232  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899457  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.899660  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899822  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899993  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.900216  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.900438  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.900462  187862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:45.179165  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:45.179194  187862 machine.go:97] duration metric: took 881.302407ms to provisionDockerMachine
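(The %!s(MISSING) in the crio.minikube command above is formatting noise in the log; judging from the echoed output, the command actually sent over SSH is roughly:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
)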
	I0731 20:59:45.179213  187862 start.go:293] postStartSetup for "embed-certs-831240" (driver="kvm2")
	I0731 20:59:45.179226  187862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:45.179252  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.179615  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:45.179646  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.182617  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183047  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.183069  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183284  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.183510  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.183654  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.183805  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.273492  187862 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:45.277593  187862 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:45.277618  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:45.277687  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:45.277782  187862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:45.277889  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:45.288172  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:45.311763  187862 start.go:296] duration metric: took 132.534326ms for postStartSetup
	I0731 20:59:45.311803  187862 fix.go:56] duration metric: took 22.228928797s for fixHost
	I0731 20:59:45.311827  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.314578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.314962  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.314998  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.315146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.315381  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315549  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315681  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.315868  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:45.316035  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:45.316045  187862 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:45.426289  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459585.381297707
	
	I0731 20:59:45.426314  187862 fix.go:216] guest clock: 1722459585.381297707
	I0731 20:59:45.426324  187862 fix.go:229] Guest: 2024-07-31 20:59:45.381297707 +0000 UTC Remote: 2024-07-31 20:59:45.311808006 +0000 UTC m=+363.090091892 (delta=69.489701ms)
	I0731 20:59:45.426379  187862 fix.go:200] guest clock delta is within tolerance: 69.489701ms
	I0731 20:59:45.426387  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 22.343543995s
	I0731 20:59:45.426419  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.426684  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:45.429330  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429757  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.429785  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429952  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430453  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430671  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430790  187862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:45.430854  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.430905  187862 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:45.430943  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.433850  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434108  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434192  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434222  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434385  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434580  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434584  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434611  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434760  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.434768  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434939  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434929  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.435099  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.435243  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.542122  187862 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:45.548583  187862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:45.690235  187862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:45.696897  187862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:45.696986  187862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:45.714456  187862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:45.714480  187862 start.go:495] detecting cgroup driver to use...
	I0731 20:59:45.714546  187862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:45.732184  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:45.747047  187862 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:45.747104  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:45.761152  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:45.775267  187862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:45.890891  187862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:46.043503  187862 docker.go:233] disabling docker service ...
	I0731 20:59:46.043577  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:46.058174  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:46.070900  187862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:46.209527  187862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:46.343868  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:46.357583  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:46.375819  187862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:46.375875  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.386762  187862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:46.386844  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.397495  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.407654  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.418326  187862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:46.428983  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.439530  187862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.457956  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
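(Taken together, the sed edits above point CRI-O at the registry.k8s.io/pause:3.9 pause image, switch it to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and allow unprivileged low ports. An illustrative check of the resulting keys, not part of the test run:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)
)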
	I0731 20:59:46.468003  187862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:46.477332  187862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:46.477400  187862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:46.490886  187862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:46.500516  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:46.617952  187862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:46.761978  187862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:46.762088  187862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:46.767210  187862 start.go:563] Will wait 60s for crictl version
	I0731 20:59:46.767275  187862 ssh_runner.go:195] Run: which crictl
	I0731 20:59:46.771502  187862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:46.810894  187862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:46.810976  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.839234  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.871209  187862 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:46.872648  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:46.875374  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875683  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:46.875698  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875900  187862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:46.880402  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:46.894098  187862 kubeadm.go:883] updating cluster {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:46.894238  187862 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:46.894300  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:46.937003  187862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:46.937079  187862 ssh_runner.go:195] Run: which lz4
	I0731 20:59:46.941158  187862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:46.945395  187862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:46.945425  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:43.910997  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.410783  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.911365  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.410690  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.911150  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.411384  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.910579  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.411171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.910578  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:48.411377  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.589690  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:47.591464  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:46.608955  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.611634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:50.615557  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.414703  187862 crio.go:462] duration metric: took 1.473569222s to copy over tarball
	I0731 20:59:48.414789  187862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:50.666750  187862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251926888s)
	I0731 20:59:50.666783  187862 crio.go:469] duration metric: took 2.252043688s to extract the tarball
	I0731 20:59:50.666793  187862 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:50.707188  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:50.749781  187862 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:50.749808  187862 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:59:50.749817  187862 kubeadm.go:934] updating node { 192.168.39.92 8443 v1.30.3 crio true true} ...
	I0731 20:59:50.749923  187862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-831240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
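(The kubelet drop-in rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. An illustrative way to inspect the effective unit on the node:

    systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet  # "active" once the restart flow has started it
)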
	I0731 20:59:50.749998  187862 ssh_runner.go:195] Run: crio config
	I0731 20:59:50.797191  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:50.797214  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:50.797227  187862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:50.797253  187862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831240 NodeName:embed-certs-831240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:50.797484  187862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:50.797556  187862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:50.808170  187862 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:50.808236  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:50.817847  187862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 20:59:50.834107  187862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:50.849722  187862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
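(The kubeadm/kubelet/kube-proxy config rendered above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file; illustrative only, assuming the bundled kubeadm supports the subcommand:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
)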
	I0731 20:59:50.866599  187862 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:50.870727  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:50.884490  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:51.043488  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:51.064792  187862 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240 for IP: 192.168.39.92
	I0731 20:59:51.064816  187862 certs.go:194] generating shared ca certs ...
	I0731 20:59:51.064836  187862 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:51.065142  187862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:51.065225  187862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:51.065254  187862 certs.go:256] generating profile certs ...
	I0731 20:59:51.065443  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/client.key
	I0731 20:59:51.065571  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key.4e545c52
	I0731 20:59:51.065639  187862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key
	I0731 20:59:51.065798  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:51.065846  187862 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:51.065857  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:51.065883  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:51.065909  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:51.065929  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:51.065971  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:51.066633  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:51.107287  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:51.138745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:51.176139  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:51.211344  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 20:59:51.241050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:59:51.269307  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:51.293184  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:59:51.316745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:51.343620  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:51.367293  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:51.391789  187862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:51.413821  187862 ssh_runner.go:195] Run: openssl version
	I0731 20:59:51.420455  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:51.431721  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436672  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436724  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.442604  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:51.453601  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:51.464109  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468598  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468648  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.474333  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:51.484758  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:51.495093  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499557  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499605  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.505244  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:51.515545  187862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:51.519923  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:51.525696  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:51.531430  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:51.537082  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:51.542713  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:51.548206  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
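(Each openssl call above uses -checkend 86400, which exits 0 only if the certificate will still be valid 24 hours from now and non-zero if it has expired or is about to. Standalone illustration, not part of the run:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid for at least another 24h' \
      || echo 'expires within 24h'
)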
	I0731 20:59:51.553705  187862 kubeadm.go:392] StartCluster: {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:51.553793  187862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:51.553841  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.592396  187862 cri.go:89] found id: ""
	I0731 20:59:51.592472  187862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:51.602510  187862 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:51.602528  187862 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:51.602578  187862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:51.612384  187862 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:51.613530  187862 kubeconfig.go:125] found "embed-certs-831240" server: "https://192.168.39.92:8443"
	I0731 20:59:51.615991  187862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:51.625205  187862 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I0731 20:59:51.625239  187862 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:51.625253  187862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:51.625307  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.663278  187862 cri.go:89] found id: ""
	I0731 20:59:51.663370  187862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:51.678876  187862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:51.688071  187862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:51.688092  187862 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:51.688139  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:51.696441  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:51.696494  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:51.705310  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:51.713545  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:51.713599  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:51.723512  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.732304  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:51.732380  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.741301  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:51.749537  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:51.749583  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:51.758609  187862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
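
The grep/rm sequence above is the stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint, otherwise it is removed so the following kubeadm phase can regenerate it. A shell sketch of the same logic (not minikube's source, just the equivalent loop):

    # Drop any kubeconfig that does not point at the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done
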
	I0731 20:59:51.774450  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:51.888916  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:48.910784  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.411137  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.911453  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.411128  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.911431  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.410483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.910975  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.411519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.911079  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.410802  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.094603  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.589951  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:53.424691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:55.609675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.666705  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.899759  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.975806  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
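
Rather than a full kubeadm init, the restart re-runs individual init phases against the refreshed config, as the commands above show. Collected in order (same phases and config path as logged; env handling simplified, and an "addon all" phase follows later once the apiserver is healthy):

    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env "PATH=/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env "PATH=/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env "PATH=/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env "PATH=/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env "PATH=/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local        --config "$CFG"
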
	I0731 20:59:53.050422  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:53.050493  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.551073  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.051427  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.551268  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.570361  187862 api_server.go:72] duration metric: took 1.519937245s to wait for apiserver process to appear ...
	I0731 20:59:54.570389  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:54.570414  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:53.911405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.410870  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.911330  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.411491  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.911380  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.411483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.910602  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.411228  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.910486  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:58.411198  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.260421  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.260455  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.260469  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.284265  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.284301  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.570976  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.575616  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:57.575644  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.071247  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.075871  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.075903  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.570906  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.581990  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.582038  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:59.070528  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:59.074787  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 20:59:59.081502  187862 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:59.081541  187862 api_server.go:131] duration metric: took 4.511132973s to wait for apiserver health ...
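
The apiserver wait above has two stages: first the process must appear (the repeated pgrep polls), then /healthz must return 200. The interim 403 (anonymous user) and 500 responses (rbac/bootstrap-roles and priority-class post-start hooks still running) are normal during startup. A rough equivalent of the polling loop, as a hypothetical sketch rather than minikube source:

    # Stage 1: wait for the kube-apiserver process.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
    # Stage 2: poll /healthz until it returns HTTP 200.
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.92:8443/healthz)" = "200" ]; do
        sleep 0.5   # early answers are 403 (anonymous) or 500 (post-start hooks pending)
    done
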
	I0731 20:59:59.081552  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:59.081561  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:59.083504  187862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:55.089279  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:57.589380  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:59.084894  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:59.098139  187862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
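
The 496-byte file written here is the bridge CNI conflist that the "Configuring bridge CNI" step refers to. It can be inspected on the node in the same way the harness runs other commands (sketch; the contents are typically a bridge + portmap plugin chain with a host-local IPAM subnet):

    out/minikube-linux-amd64 -p embed-certs-831240 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
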
	I0731 20:59:59.118458  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:59.128022  187862 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:59.128061  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:59.128071  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:59.128082  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:59.128100  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:59.128113  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:59.128121  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:59.128134  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:59.128145  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:59.128156  187862 system_pods.go:74] duration metric: took 9.673815ms to wait for pod list to return data ...
	I0731 20:59:59.128168  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:59.131825  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:59.131853  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:59.131865  187862 node_conditions.go:105] duration metric: took 3.691724ms to run NodePressure ...
	I0731 20:59:59.131897  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:59.494923  187862 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501848  187862 kubeadm.go:739] kubelet initialised
	I0731 20:59:59.501875  187862 kubeadm.go:740] duration metric: took 6.920816ms waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501885  187862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:59.510503  187862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.518204  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518234  187862 pod_ready.go:81] duration metric: took 7.702873ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.518247  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518263  187862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.523236  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523258  187862 pod_ready.go:81] duration metric: took 4.985299ms for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.523266  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.535237  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535256  187862 pod_ready.go:81] duration metric: took 11.97449ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.535270  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.541512  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541531  187862 pod_ready.go:81] duration metric: took 6.24797ms for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.541539  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541545  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.922722  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922757  187862 pod_ready.go:81] duration metric: took 381.203526ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.922771  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922779  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.322049  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322077  187862 pod_ready.go:81] duration metric: took 399.289505ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.322088  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322094  187862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.722961  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.722993  187862 pod_ready.go:81] duration metric: took 400.88956ms for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.723008  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.723017  187862 pod_ready.go:38] duration metric: took 1.221112347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
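
All of the per-pod waits above short-circuit for the same reason: immediately after the kubelet restart the node itself still reports Ready=False, so pod readiness is skipped. The equivalent manual checks (sketch):

    kubectl --context embed-certs-831240 get node embed-certs-831240 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl --context embed-certs-831240 -n kube-system get pods -o wide
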
	I0731 21:00:00.723050  187862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:00:00.735642  187862 ops.go:34] apiserver oom_adj: -16
	I0731 21:00:00.735697  187862 kubeadm.go:597] duration metric: took 9.133136671s to restartPrimaryControlPlane
	I0731 21:00:00.735735  187862 kubeadm.go:394] duration metric: took 9.182030801s to StartCluster
	I0731 21:00:00.735764  187862 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.735860  187862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:00:00.737955  187862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.738247  187862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:00:00.738329  187862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:00:00.738418  187862 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831240"
	I0731 21:00:00.738432  187862 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831240"
	I0731 21:00:00.738463  187862 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-831240"
	W0731 21:00:00.738475  187862 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:00:00.738481  187862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831240"
	I0731 21:00:00.738513  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738547  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:00:00.738581  187862 addons.go:69] Setting metrics-server=true in profile "embed-certs-831240"
	I0731 21:00:00.738651  187862 addons.go:234] Setting addon metrics-server=true in "embed-certs-831240"
	W0731 21:00:00.738666  187862 addons.go:243] addon metrics-server should already be in state true
	I0731 21:00:00.738735  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738818  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738858  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.738897  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738960  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.739144  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.739190  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.740244  187862 out.go:177] * Verifying Kubernetes components...
	I0731 21:00:00.746003  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:00.755735  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0731 21:00:00.755773  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0731 21:00:00.756268  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756271  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756594  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0731 21:00:00.756820  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756847  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.756892  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756917  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757069  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.757228  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757254  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757458  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.757638  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.757668  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757745  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.757774  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.758005  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.758543  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.758586  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.761553  187862 addons.go:234] Setting addon default-storageclass=true in "embed-certs-831240"
	W0731 21:00:00.761587  187862 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:00:00.761618  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.762018  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.762070  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.775492  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0731 21:00:00.776091  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.776712  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.776743  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.776760  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I0731 21:00:00.777245  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.777402  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.777513  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.777920  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.777945  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.778185  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0731 21:00:00.778393  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.778603  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.778687  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.779223  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.779243  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.779665  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.779718  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.780231  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.780274  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.780612  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.781947  187862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:00:00.782994  187862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:58.110503  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.112109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.784194  187862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:00.784216  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:00:00.784240  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.784937  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:00:00.784958  187862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:00:00.784984  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.788544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.788947  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.788970  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789127  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.789389  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.789521  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.789548  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789571  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.789773  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.790126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.790324  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.790502  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.790663  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.799024  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0731 21:00:00.799718  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.800341  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.800360  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.800967  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.801258  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.803078  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.803555  187862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:00.803571  187862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:00:00.803591  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.809363  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.809461  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809492  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.809512  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809680  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.809858  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.810032  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.933963  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:00:00.953572  187862 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:01.036486  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:01.040636  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:00:01.040658  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:00:01.063384  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:01.068645  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:00:01.068675  187862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:00:01.090838  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:01.090861  187862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:00:01.113173  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:02.099966  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063427097s)
	I0731 21:00:02.100021  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100035  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100080  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036657274s)
	I0731 21:00:02.100129  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100338  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100441  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100452  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100461  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100580  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100605  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100615  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100623  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100698  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100709  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100723  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100866  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100875  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100882  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.107654  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.107688  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.107952  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.107968  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.108003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140031  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026799248s)
	I0731 21:00:02.140100  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140424  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140455  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140470  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140482  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140494  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140772  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140800  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140808  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140817  187862 addons.go:475] Verifying addon metrics-server=true in "embed-certs-831240"
	I0731 21:00:02.142583  187862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:00:02.143787  187862 addons.go:510] duration metric: took 1.405477731s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
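
The three addons are applied by running kubectl against manifests copied to /etc/kubernetes/addons inside the VM, as the commands above show. On a running profile the same result can be reached through the CLI (sketch, using the binary path this report uses elsewhere):

    out/minikube-linux-amd64 -p embed-certs-831240 addons enable storage-provisioner
    out/minikube-linux-amd64 -p embed-certs-831240 addons enable metrics-server
    out/minikube-linux-amd64 -p embed-certs-831240 addons list
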
	I0731 20:59:58.910774  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.410697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.911233  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.411170  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.911416  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.410979  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.911444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.411537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.911216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:03.411386  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.588315  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.610109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:04.610324  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.958162  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:05.458997  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:03.910942  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.411505  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.911485  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.410763  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.910937  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.411216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.910743  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.410941  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.910922  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:08.410593  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.589597  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.089475  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.090023  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:06.610390  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.110758  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.958154  187862 node_ready.go:49] node "embed-certs-831240" has status "Ready":"True"
	I0731 21:00:07.958180  187862 node_ready.go:38] duration metric: took 7.004576791s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:07.958191  187862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
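
The 6m0s node wait that just completed (about 7s in this run) corresponds to a one-shot kubectl wait (sketch):

    kubectl --context embed-certs-831240 wait --for=condition=Ready \
      node/embed-certs-831240 --timeout=6m0s
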
	I0731 21:00:07.969639  187862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974704  187862 pod_ready.go:92] pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:07.974733  187862 pod_ready.go:81] duration metric: took 5.064645ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974745  187862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:09.980566  187862 pod_ready.go:102] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:10.480476  187862 pod_ready.go:92] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.480501  187862 pod_ready.go:81] duration metric: took 2.505748029s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.480511  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485850  187862 pod_ready.go:92] pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.485873  187862 pod_ready.go:81] duration metric: took 5.353478ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485883  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:08.910788  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.410807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.911286  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.411372  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.910748  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.411253  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.411208  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.910887  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:13.411318  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.589454  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.090483  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:11.610842  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.110306  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:12.492346  187862 pod_ready.go:102] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.991859  187862 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.991884  187862 pod_ready.go:81] duration metric: took 3.505993775s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.991893  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997932  187862 pod_ready.go:92] pod "kube-proxy-x662j" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.997961  187862 pod_ready.go:81] duration metric: took 6.060225ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997974  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007155  187862 pod_ready.go:92] pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:14.007178  187862 pod_ready.go:81] duration metric: took 9.197289ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007187  187862 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:16.013417  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.910943  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.410728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.911343  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.410545  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.910560  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.411117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.910537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.410761  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.910796  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:18.411138  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.589010  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.589215  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:16.609886  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.610209  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.611613  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.013504  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.513116  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.911394  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.411098  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.910629  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.410698  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.910760  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.410503  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.910582  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.410724  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.910792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:23.410961  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.089938  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.588082  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.109996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:25.110361  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:22.514254  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:24.514729  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.013263  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.910510  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.410725  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.411543  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.911473  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.410494  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.910519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.410950  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.911528  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:28.411350  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.589873  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.590134  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.612311  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:30.110116  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:29.014386  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:31.014534  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:28.911371  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.411269  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.911465  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.410633  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.911166  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.411184  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.910806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.410806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.911125  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:33.410942  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:33.411021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:33.461204  188656 cri.go:89] found id: ""
	I0731 21:00:33.461232  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.461241  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:33.461249  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:33.461313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:33.500898  188656 cri.go:89] found id: ""
	I0731 21:00:33.500927  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.500937  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:33.500944  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:33.501010  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:33.536865  188656 cri.go:89] found id: ""
	I0731 21:00:33.536889  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.536902  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:33.536908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:33.536957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:33.578540  188656 cri.go:89] found id: ""
	I0731 21:00:33.578570  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.578582  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:33.578595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:33.578686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:33.616242  188656 cri.go:89] found id: ""
	I0731 21:00:33.616266  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.616276  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:33.616283  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:33.616345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:33.650436  188656 cri.go:89] found id: ""
	I0731 21:00:33.650468  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.650479  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:33.650487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:33.650552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:33.687256  188656 cri.go:89] found id: ""
	I0731 21:00:33.687288  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.687300  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:33.687308  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:33.687365  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:33.720381  188656 cri.go:89] found id: ""
	I0731 21:00:33.720428  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.720440  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:33.720453  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:33.720469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:33.772182  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:33.772226  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:33.787323  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:33.787359  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:00:30.089778  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.587877  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.110769  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:34.610418  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:33.514142  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.013676  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:00:33.907858  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:33.907878  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:33.907892  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:33.974118  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:33.974157  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:36.513427  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:36.527531  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:36.527588  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:36.567679  188656 cri.go:89] found id: ""
	I0731 21:00:36.567706  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.567714  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:36.567726  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:36.567786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:36.608106  188656 cri.go:89] found id: ""
	I0731 21:00:36.608134  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.608145  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:36.608153  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:36.608215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:36.651783  188656 cri.go:89] found id: ""
	I0731 21:00:36.651815  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.651824  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:36.651830  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:36.651892  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:36.686716  188656 cri.go:89] found id: ""
	I0731 21:00:36.686743  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.686751  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:36.686758  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:36.686823  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:36.721823  188656 cri.go:89] found id: ""
	I0731 21:00:36.721857  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.721865  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:36.721871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:36.721939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:36.758060  188656 cri.go:89] found id: ""
	I0731 21:00:36.758093  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.758103  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:36.758112  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:36.758173  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:36.801667  188656 cri.go:89] found id: ""
	I0731 21:00:36.801694  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.801704  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:36.801712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:36.801776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:36.845084  188656 cri.go:89] found id: ""
	I0731 21:00:36.845113  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.845124  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:36.845137  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:36.845152  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:36.897208  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:36.897248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:36.910716  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:36.910750  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:36.987259  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:36.987285  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:36.987304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:37.061109  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:37.061144  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:34.589416  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.592841  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.088346  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.611386  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.111149  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:38.516701  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.017409  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.600847  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:39.615897  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:39.615957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:39.655390  188656 cri.go:89] found id: ""
	I0731 21:00:39.655417  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.655424  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:39.655430  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:39.655502  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:39.694180  188656 cri.go:89] found id: ""
	I0731 21:00:39.694213  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.694224  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:39.694231  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:39.694300  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:39.736752  188656 cri.go:89] found id: ""
	I0731 21:00:39.736783  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.736793  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:39.736801  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:39.736860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:39.775685  188656 cri.go:89] found id: ""
	I0731 21:00:39.775770  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.775790  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:39.775802  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:39.775871  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:39.816790  188656 cri.go:89] found id: ""
	I0731 21:00:39.816820  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.816829  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:39.816835  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:39.816886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:39.854931  188656 cri.go:89] found id: ""
	I0731 21:00:39.854963  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.854973  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:39.854981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:39.855045  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:39.891039  188656 cri.go:89] found id: ""
	I0731 21:00:39.891066  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.891074  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:39.891083  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:39.891136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:39.927434  188656 cri.go:89] found id: ""
	I0731 21:00:39.927463  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.927473  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:39.927483  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:39.927496  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:39.941240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:39.941272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:40.017212  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:40.017233  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:40.017246  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:40.094047  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:40.094081  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:40.138940  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:40.138966  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:42.690818  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:42.704855  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:42.704931  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:42.752315  188656 cri.go:89] found id: ""
	I0731 21:00:42.752347  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.752368  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:42.752376  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:42.752445  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:42.790060  188656 cri.go:89] found id: ""
	I0731 21:00:42.790090  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.790101  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:42.790109  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:42.790220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:42.825504  188656 cri.go:89] found id: ""
	I0731 21:00:42.825532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.825540  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:42.825547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:42.825598  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:42.860157  188656 cri.go:89] found id: ""
	I0731 21:00:42.860193  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.860204  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:42.860213  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:42.860286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:42.902914  188656 cri.go:89] found id: ""
	I0731 21:00:42.902947  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.902959  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:42.902967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:42.903036  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:42.950503  188656 cri.go:89] found id: ""
	I0731 21:00:42.950532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.950541  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:42.950550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:42.950603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:43.010232  188656 cri.go:89] found id: ""
	I0731 21:00:43.010261  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.010272  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:43.010280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:43.010344  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:43.045487  188656 cri.go:89] found id: ""
	I0731 21:00:43.045517  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.045527  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:43.045539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:43.045556  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:43.123248  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:43.123279  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:43.123296  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:43.212230  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:43.212272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:43.254595  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:43.254626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:43.306187  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:43.306227  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:41.589806  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.088126  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.611786  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.109436  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:43.513500  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.514161  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.820246  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:45.835707  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:45.835786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:45.872079  188656 cri.go:89] found id: ""
	I0731 21:00:45.872110  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.872122  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:45.872130  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:45.872196  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:45.910637  188656 cri.go:89] found id: ""
	I0731 21:00:45.910664  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.910672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:45.910678  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:45.910740  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:45.945316  188656 cri.go:89] found id: ""
	I0731 21:00:45.945360  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.945372  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:45.945380  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:45.945455  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:45.982015  188656 cri.go:89] found id: ""
	I0731 21:00:45.982046  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.982057  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:45.982096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:45.982165  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:46.017359  188656 cri.go:89] found id: ""
	I0731 21:00:46.017392  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.017404  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:46.017412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:46.017478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:46.054401  188656 cri.go:89] found id: ""
	I0731 21:00:46.054431  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.054447  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:46.054454  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:46.054507  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:46.092107  188656 cri.go:89] found id: ""
	I0731 21:00:46.092130  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.092137  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:46.092143  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:46.092190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:46.128613  188656 cri.go:89] found id: ""
	I0731 21:00:46.128642  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.128652  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:46.128665  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:46.128679  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:46.144539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:46.144570  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:46.219399  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:46.219433  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:46.219448  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:46.304486  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:46.304529  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:46.344087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:46.344121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:46.090543  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.090607  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:46.111072  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.610316  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.611553  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.014287  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.513252  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.894728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:48.916610  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:48.916675  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:48.978515  188656 cri.go:89] found id: ""
	I0731 21:00:48.978543  188656 logs.go:276] 0 containers: []
	W0731 21:00:48.978550  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:48.978557  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:48.978615  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:49.026224  188656 cri.go:89] found id: ""
	I0731 21:00:49.026257  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.026268  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:49.026276  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:49.026354  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:49.064967  188656 cri.go:89] found id: ""
	I0731 21:00:49.064994  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.065003  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:49.065010  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:49.065070  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:49.101966  188656 cri.go:89] found id: ""
	I0731 21:00:49.101990  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.101999  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:49.102004  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:49.102056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:49.137775  188656 cri.go:89] found id: ""
	I0731 21:00:49.137801  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.137809  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:49.137815  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:49.137867  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:49.173778  188656 cri.go:89] found id: ""
	I0731 21:00:49.173824  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.173832  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:49.173839  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:49.173908  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:49.207211  188656 cri.go:89] found id: ""
	I0731 21:00:49.207239  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.207247  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:49.207254  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:49.207333  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:49.244126  188656 cri.go:89] found id: ""
	I0731 21:00:49.244159  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.244180  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:49.244202  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:49.244221  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:49.299606  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:49.299646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:49.314093  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:49.314121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:49.384691  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:49.384712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:49.384728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:49.464425  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:49.464462  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.005670  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:52.019617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:52.019705  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:52.053452  188656 cri.go:89] found id: ""
	I0731 21:00:52.053485  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.053494  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:52.053500  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:52.053552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:52.094462  188656 cri.go:89] found id: ""
	I0731 21:00:52.094495  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.094504  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:52.094510  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:52.094572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:52.134555  188656 cri.go:89] found id: ""
	I0731 21:00:52.134584  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.134595  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:52.134602  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:52.134676  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:52.168805  188656 cri.go:89] found id: ""
	I0731 21:00:52.168851  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.168863  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:52.168871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:52.168939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:52.203093  188656 cri.go:89] found id: ""
	I0731 21:00:52.203121  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.203132  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:52.203140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:52.203213  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:52.237816  188656 cri.go:89] found id: ""
	I0731 21:00:52.237842  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.237850  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:52.237857  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:52.237906  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:52.272136  188656 cri.go:89] found id: ""
	I0731 21:00:52.272175  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.272194  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:52.272202  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:52.272261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:52.306616  188656 cri.go:89] found id: ""
	I0731 21:00:52.306641  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.306649  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:52.306659  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:52.306671  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:52.372668  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:52.372690  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:52.372707  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:52.457752  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:52.457794  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.496087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:52.496129  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:52.548137  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:52.548176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:50.588204  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.089737  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.110034  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.110293  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:52.514848  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.013623  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.015221  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.063463  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:55.076922  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:55.077005  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:55.117479  188656 cri.go:89] found id: ""
	I0731 21:00:55.117511  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.117523  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:55.117531  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:55.117595  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:55.156311  188656 cri.go:89] found id: ""
	I0731 21:00:55.156339  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.156348  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:55.156354  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:55.156421  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:55.196778  188656 cri.go:89] found id: ""
	I0731 21:00:55.196807  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.196818  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:55.196826  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:55.196898  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:55.237575  188656 cri.go:89] found id: ""
	I0731 21:00:55.237605  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.237614  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:55.237620  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:55.237672  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:55.271717  188656 cri.go:89] found id: ""
	I0731 21:00:55.271746  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.271754  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:55.271760  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:55.271811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:55.307586  188656 cri.go:89] found id: ""
	I0731 21:00:55.307618  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.307630  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:55.307637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:55.307708  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:55.343325  188656 cri.go:89] found id: ""
	I0731 21:00:55.343352  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.343361  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:55.343367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:55.343418  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:55.378959  188656 cri.go:89] found id: ""
	I0731 21:00:55.378988  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.378997  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:55.379008  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:55.379021  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:55.454213  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:55.454243  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:55.454260  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:55.532802  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:55.532839  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.575903  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:55.575940  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:55.635105  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:55.635140  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.149801  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:58.162682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:58.162743  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:58.196220  188656 cri.go:89] found id: ""
	I0731 21:00:58.196245  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.196254  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:58.196260  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:58.196313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:58.231052  188656 cri.go:89] found id: ""
	I0731 21:00:58.231083  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.231093  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:58.231099  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:58.231156  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:58.265569  188656 cri.go:89] found id: ""
	I0731 21:00:58.265599  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.265612  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:58.265633  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:58.265695  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:58.300750  188656 cri.go:89] found id: ""
	I0731 21:00:58.300779  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.300788  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:58.300793  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:58.300869  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:58.333920  188656 cri.go:89] found id: ""
	I0731 21:00:58.333949  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.333958  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:58.333963  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:58.334015  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:58.368732  188656 cri.go:89] found id: ""
	I0731 21:00:58.368759  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.368771  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:58.368787  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:58.368855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:58.408454  188656 cri.go:89] found id: ""
	I0731 21:00:58.408488  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.408501  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:58.408510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:58.408575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:58.445855  188656 cri.go:89] found id: ""
	I0731 21:00:58.445888  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.445900  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:58.445913  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:58.445934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:58.496144  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:58.496177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.510708  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:58.510743  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:58.580690  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:58.580712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:58.580725  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:58.657281  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:58.657320  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.591068  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:58.088264  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.610282  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.611376  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.017831  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.514115  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.196374  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:01.209044  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:01.209111  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:01.247313  188656 cri.go:89] found id: ""
	I0731 21:01:01.247343  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.247353  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:01.247360  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:01.247443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:01.282269  188656 cri.go:89] found id: ""
	I0731 21:01:01.282300  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.282308  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:01.282314  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:01.282370  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:01.315598  188656 cri.go:89] found id: ""
	I0731 21:01:01.315628  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.315638  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:01.315644  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:01.315697  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:01.352492  188656 cri.go:89] found id: ""
	I0731 21:01:01.352521  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.352533  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:01.352540  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:01.352605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:01.387858  188656 cri.go:89] found id: ""
	I0731 21:01:01.387885  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.387894  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:01.387900  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:01.387950  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:01.425014  188656 cri.go:89] found id: ""
	I0731 21:01:01.425042  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.425052  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:01.425061  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:01.425129  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:01.463068  188656 cri.go:89] found id: ""
	I0731 21:01:01.463098  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.463107  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:01.463113  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:01.463171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:01.500174  188656 cri.go:89] found id: ""
	I0731 21:01:01.500203  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.500214  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:01.500229  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:01.500244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:01.554350  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:01.554389  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:01.569353  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:01.569394  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:01.641074  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:01.641095  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:01.641108  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:01.722340  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:01.722377  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:00.088915  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.089981  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.109888  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.109951  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.015302  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.513535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.264035  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:04.278374  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:04.278441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:04.314037  188656 cri.go:89] found id: ""
	I0731 21:01:04.314068  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.314079  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:04.314087  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:04.314159  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:04.347604  188656 cri.go:89] found id: ""
	I0731 21:01:04.347635  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.347646  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:04.347653  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:04.347718  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:04.382412  188656 cri.go:89] found id: ""
	I0731 21:01:04.382442  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.382454  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:04.382462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:04.382516  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:04.419097  188656 cri.go:89] found id: ""
	I0731 21:01:04.419130  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.419142  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:04.419150  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:04.419209  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:04.464561  188656 cri.go:89] found id: ""
	I0731 21:01:04.464592  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.464601  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:04.464607  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:04.464683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:04.500484  188656 cri.go:89] found id: ""
	I0731 21:01:04.500510  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.500518  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:04.500524  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:04.500577  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:04.536211  188656 cri.go:89] found id: ""
	I0731 21:01:04.536239  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.536250  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:04.536257  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:04.536324  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:04.569521  188656 cri.go:89] found id: ""
	I0731 21:01:04.569548  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.569556  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:04.569567  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:04.569583  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:04.621228  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:04.621261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:04.637500  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:04.637527  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:04.710577  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:04.710606  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:04.710623  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.788305  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:04.788343  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
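	A minimal sketch of the probe loop the log above keeps repeating, built only from commands that already appear in this output; it assumes a shell on the minikube node (for example via "minikube ssh") with crictl available there:
	
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      # same query the test driver issues for each control-plane component
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done
	    # with no kube-apiserver container running, the describe-nodes step below is
	    # refused on localhost:8443 for the same reason the probes come back empty
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	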
	I0731 21:01:07.329209  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:07.343021  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:07.343089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:07.378556  188656 cri.go:89] found id: ""
	I0731 21:01:07.378588  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.378603  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:07.378610  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:07.378679  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:07.416419  188656 cri.go:89] found id: ""
	I0731 21:01:07.416455  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.416467  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:07.416474  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:07.416538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:07.454720  188656 cri.go:89] found id: ""
	I0731 21:01:07.454749  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.454758  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:07.454764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:07.454815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:07.488963  188656 cri.go:89] found id: ""
	I0731 21:01:07.488995  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.489004  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:07.489009  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:07.489060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:07.531916  188656 cri.go:89] found id: ""
	I0731 21:01:07.531949  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.531961  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:07.531967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:07.532019  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:07.569233  188656 cri.go:89] found id: ""
	I0731 21:01:07.569266  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.569275  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:07.569281  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:07.569350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:07.606318  188656 cri.go:89] found id: ""
	I0731 21:01:07.606349  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.606360  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:07.606368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:07.606442  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:07.641408  188656 cri.go:89] found id: ""
	I0731 21:01:07.641436  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.641445  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:07.641454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:07.641466  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.681094  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:07.681123  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:07.734600  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:07.734641  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:07.748747  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:07.748779  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:07.821775  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:07.821799  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:07.821816  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.590174  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:07.089655  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.110694  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:08.610381  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.611128  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:09.013688  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:11.513361  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.399973  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:10.412908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:10.412986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:10.448866  188656 cri.go:89] found id: ""
	I0731 21:01:10.448895  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.448903  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:10.448909  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:10.448966  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:10.486309  188656 cri.go:89] found id: ""
	I0731 21:01:10.486338  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.486346  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:10.486352  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:10.486411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:10.522834  188656 cri.go:89] found id: ""
	I0731 21:01:10.522856  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.522863  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:10.522870  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:10.522929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:10.558272  188656 cri.go:89] found id: ""
	I0731 21:01:10.558304  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.558324  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:10.558330  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:10.558391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:10.596560  188656 cri.go:89] found id: ""
	I0731 21:01:10.596589  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.596600  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:10.596608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:10.596668  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:10.633488  188656 cri.go:89] found id: ""
	I0731 21:01:10.633518  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.633529  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:10.633537  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:10.633597  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:10.665779  188656 cri.go:89] found id: ""
	I0731 21:01:10.665812  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.665824  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:10.665832  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:10.665895  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:10.700526  188656 cri.go:89] found id: ""
	I0731 21:01:10.700556  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.700564  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:10.700575  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:10.700587  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:10.753507  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:10.753550  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:10.768056  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:10.768089  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:10.842120  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:10.842142  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:10.842159  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:10.916532  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:10.916565  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:13.456826  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:13.471064  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:13.471130  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:13.505660  188656 cri.go:89] found id: ""
	I0731 21:01:13.505694  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.505707  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:13.505713  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:13.505775  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:13.543084  188656 cri.go:89] found id: ""
	I0731 21:01:13.543109  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.543117  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:13.543123  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:13.543182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:13.578940  188656 cri.go:89] found id: ""
	I0731 21:01:13.578966  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.578974  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:13.578981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:13.579047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:13.617710  188656 cri.go:89] found id: ""
	I0731 21:01:13.617733  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.617740  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:13.617747  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:13.617810  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:13.653535  188656 cri.go:89] found id: ""
	I0731 21:01:13.653567  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.653579  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:13.653587  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:13.653658  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:13.687914  188656 cri.go:89] found id: ""
	I0731 21:01:13.687942  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.687953  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:13.687960  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:13.688031  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:13.725242  188656 cri.go:89] found id: ""
	I0731 21:01:13.725278  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.725287  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:13.725293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:13.725372  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:13.760890  188656 cri.go:89] found id: ""
	I0731 21:01:13.760918  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.760929  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:13.760943  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:13.760958  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:13.810212  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:13.810252  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:13.824229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:13.824259  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:09.588945  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:12.088514  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:14.088684  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.109760  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:15.109938  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.515603  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:16.013268  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:13.895306  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:13.895331  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:13.895344  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:13.976366  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:13.976411  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.520165  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:16.533970  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:16.534035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:16.571444  188656 cri.go:89] found id: ""
	I0731 21:01:16.571474  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.571482  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:16.571488  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:16.571539  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:16.608150  188656 cri.go:89] found id: ""
	I0731 21:01:16.608176  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.608186  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:16.608194  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:16.608254  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:16.643252  188656 cri.go:89] found id: ""
	I0731 21:01:16.643283  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.643294  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:16.643302  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:16.643363  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:16.679521  188656 cri.go:89] found id: ""
	I0731 21:01:16.679552  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.679563  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:16.679571  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:16.679624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:16.713502  188656 cri.go:89] found id: ""
	I0731 21:01:16.713532  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.713541  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:16.713547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:16.713624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:16.748276  188656 cri.go:89] found id: ""
	I0731 21:01:16.748309  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.748318  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:16.748324  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:16.748383  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:16.783895  188656 cri.go:89] found id: ""
	I0731 21:01:16.783929  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.783940  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:16.783948  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:16.784014  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:16.817362  188656 cri.go:89] found id: ""
	I0731 21:01:16.817392  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.817415  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:16.817425  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:16.817440  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:16.872584  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:16.872637  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:16.887240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:16.887275  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:16.961920  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:16.961949  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:16.961967  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:17.041889  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:17.041924  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.089420  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.089611  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:17.110442  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.111424  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.013772  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:20.514737  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.585935  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:19.600389  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:19.600475  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:19.635883  188656 cri.go:89] found id: ""
	I0731 21:01:19.635913  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.635924  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:19.635932  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:19.635995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:19.674413  188656 cri.go:89] found id: ""
	I0731 21:01:19.674441  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.674459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:19.674471  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:19.674538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:19.708181  188656 cri.go:89] found id: ""
	I0731 21:01:19.708211  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.708219  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:19.708224  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:19.708292  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:19.744737  188656 cri.go:89] found id: ""
	I0731 21:01:19.744774  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.744783  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:19.744791  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:19.744849  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:19.784366  188656 cri.go:89] found id: ""
	I0731 21:01:19.784398  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.784406  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:19.784412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:19.784465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:19.819234  188656 cri.go:89] found id: ""
	I0731 21:01:19.819269  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.819280  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:19.819289  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:19.819355  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:19.851462  188656 cri.go:89] found id: ""
	I0731 21:01:19.851494  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.851503  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:19.851510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:19.851563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:19.896575  188656 cri.go:89] found id: ""
	I0731 21:01:19.896604  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.896612  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:19.896624  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:19.896640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:19.952239  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:19.952284  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:19.969411  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:19.969442  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:20.042820  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:20.042847  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:20.042863  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:20.130070  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:20.130115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:22.674956  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:22.688548  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:22.688616  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:22.728750  188656 cri.go:89] found id: ""
	I0731 21:01:22.728775  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.728784  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:22.728790  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:22.728844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:22.763765  188656 cri.go:89] found id: ""
	I0731 21:01:22.763793  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.763801  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:22.763807  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:22.763858  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:22.799134  188656 cri.go:89] found id: ""
	I0731 21:01:22.799163  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.799172  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:22.799178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:22.799237  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:22.833972  188656 cri.go:89] found id: ""
	I0731 21:01:22.833998  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.834005  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:22.834011  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:22.834060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:22.869686  188656 cri.go:89] found id: ""
	I0731 21:01:22.869711  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.869719  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:22.869724  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:22.869776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:22.907919  188656 cri.go:89] found id: ""
	I0731 21:01:22.907950  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.907961  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:22.907969  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:22.908035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:22.947162  188656 cri.go:89] found id: ""
	I0731 21:01:22.947192  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.947204  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:22.947212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:22.947273  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:22.992822  188656 cri.go:89] found id: ""
	I0731 21:01:22.992860  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.992872  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:22.992884  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:22.992900  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:23.045552  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:23.045589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:23.059895  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:23.059925  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:23.135535  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:23.135561  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:23.135577  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:23.217468  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:23.217521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:20.588507  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.588759  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:21.611467  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:24.110813  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.514805  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.012583  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.013095  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.771615  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:25.785037  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:25.785115  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:25.821070  188656 cri.go:89] found id: ""
	I0731 21:01:25.821100  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.821112  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:25.821120  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:25.821176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:25.856174  188656 cri.go:89] found id: ""
	I0731 21:01:25.856206  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.856217  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:25.856225  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:25.856288  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:25.889440  188656 cri.go:89] found id: ""
	I0731 21:01:25.889473  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.889483  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:25.889490  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:25.889546  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:25.924770  188656 cri.go:89] found id: ""
	I0731 21:01:25.924796  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.924804  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:25.924811  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:25.924860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:25.963529  188656 cri.go:89] found id: ""
	I0731 21:01:25.963576  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.963588  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:25.963595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:25.963670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:26.000033  188656 cri.go:89] found id: ""
	I0731 21:01:26.000060  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.000069  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:26.000076  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:26.000133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:26.035310  188656 cri.go:89] found id: ""
	I0731 21:01:26.035341  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.035353  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:26.035359  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:26.035423  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:26.070096  188656 cri.go:89] found id: ""
	I0731 21:01:26.070119  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.070127  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:26.070138  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:26.070149  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:26.141198  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:26.141220  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:26.141237  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:26.219766  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:26.219805  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:26.264836  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:26.264864  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:26.316672  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:26.316709  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:28.832882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:24.588907  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.088961  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.089538  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:26.111336  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.609453  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:30.610379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.014929  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:31.512827  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.846243  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:28.846307  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:28.880312  188656 cri.go:89] found id: ""
	I0731 21:01:28.880339  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.880350  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:28.880358  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:28.880419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:28.914625  188656 cri.go:89] found id: ""
	I0731 21:01:28.914652  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.914660  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:28.914667  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:28.914726  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:28.949138  188656 cri.go:89] found id: ""
	I0731 21:01:28.949173  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.949185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:28.949192  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:28.949264  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:28.985229  188656 cri.go:89] found id: ""
	I0731 21:01:28.985258  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.985266  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:28.985272  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:28.985326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:29.021520  188656 cri.go:89] found id: ""
	I0731 21:01:29.021550  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.021562  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:29.021568  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:29.021629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:29.058639  188656 cri.go:89] found id: ""
	I0731 21:01:29.058671  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.058682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:29.058690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:29.058755  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:29.105435  188656 cri.go:89] found id: ""
	I0731 21:01:29.105458  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.105466  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:29.105472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:29.105528  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:29.147118  188656 cri.go:89] found id: ""
	I0731 21:01:29.147144  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.147152  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:29.147161  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:29.147177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:29.231698  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:29.231735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:29.276163  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:29.276200  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:29.330551  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:29.330589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:29.350293  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:29.350323  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:29.456073  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
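	Every "describe nodes" attempt fails the same way: the kubeconfig baked into the guest points kubectl at localhost:8443, and since crictl finds no kube-apiserver container, nothing is listening on that port. The exact probe, copied from the command in the log and runnable inside the guest, is shown below; the connection-refused line is the expected output while the apiserver is down:

	    # probe run while gathering "describe nodes"; fails until the apiserver is back
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    # => The connection to the server localhost:8443 was refused - did you specify the right host or port?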
	I0731 21:01:31.956964  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:31.970712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:31.970780  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:32.009546  188656 cri.go:89] found id: ""
	I0731 21:01:32.009574  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.009585  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:32.009593  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:32.009674  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:32.046622  188656 cri.go:89] found id: ""
	I0731 21:01:32.046661  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.046672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:32.046680  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:32.046748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:32.080958  188656 cri.go:89] found id: ""
	I0731 21:01:32.080985  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.080993  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:32.080998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:32.081052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:32.117454  188656 cri.go:89] found id: ""
	I0731 21:01:32.117480  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.117489  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:32.117495  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:32.117561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:32.152335  188656 cri.go:89] found id: ""
	I0731 21:01:32.152369  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.152380  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:32.152387  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:32.152441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:32.186631  188656 cri.go:89] found id: ""
	I0731 21:01:32.186670  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.186682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:32.186691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:32.186761  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:32.221496  188656 cri.go:89] found id: ""
	I0731 21:01:32.221533  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.221544  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:32.221551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:32.221632  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:32.256315  188656 cri.go:89] found id: ""
	I0731 21:01:32.256341  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.256350  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:32.256360  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:32.256372  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:32.295759  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:32.295788  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:32.347855  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:32.347888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:32.360982  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:32.361012  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:32.433900  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:32.433926  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:32.433947  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:31.588474  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.590513  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:32.610672  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.110698  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.514600  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:36.013157  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.013369  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:35.027203  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:35.027298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:35.065567  188656 cri.go:89] found id: ""
	I0731 21:01:35.065599  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.065610  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:35.065617  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:35.065686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:35.104285  188656 cri.go:89] found id: ""
	I0731 21:01:35.104317  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.104328  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:35.104335  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:35.104430  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:35.151081  188656 cri.go:89] found id: ""
	I0731 21:01:35.151108  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.151119  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:35.151127  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:35.151190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:35.196844  188656 cri.go:89] found id: ""
	I0731 21:01:35.196875  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.196886  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:35.196894  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:35.196964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:35.253581  188656 cri.go:89] found id: ""
	I0731 21:01:35.253612  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.253623  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:35.253630  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:35.253703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:35.295791  188656 cri.go:89] found id: ""
	I0731 21:01:35.295819  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.295830  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:35.295838  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:35.295904  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:35.329405  188656 cri.go:89] found id: ""
	I0731 21:01:35.329441  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.329454  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:35.329462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:35.329526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:35.363976  188656 cri.go:89] found id: ""
	I0731 21:01:35.364009  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.364022  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:35.364035  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:35.364051  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:35.421213  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:35.421253  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:35.436612  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:35.436646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:35.514154  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:35.514182  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:35.514197  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:35.588048  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:35.588082  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:38.133466  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:38.147071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:38.147142  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:38.179992  188656 cri.go:89] found id: ""
	I0731 21:01:38.180024  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.180036  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:38.180044  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:38.180116  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:38.213784  188656 cri.go:89] found id: ""
	I0731 21:01:38.213816  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.213827  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:38.213834  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:38.213901  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:38.254190  188656 cri.go:89] found id: ""
	I0731 21:01:38.254220  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.254229  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:38.254235  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:38.254284  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:38.289695  188656 cri.go:89] found id: ""
	I0731 21:01:38.289732  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.289743  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:38.289751  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:38.289819  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:38.327743  188656 cri.go:89] found id: ""
	I0731 21:01:38.327777  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.327788  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:38.327797  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:38.327853  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:38.361373  188656 cri.go:89] found id: ""
	I0731 21:01:38.361409  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.361421  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:38.361428  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:38.361501  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:38.396832  188656 cri.go:89] found id: ""
	I0731 21:01:38.396860  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.396868  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:38.396873  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:38.396923  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:38.431822  188656 cri.go:89] found id: ""
	I0731 21:01:38.431855  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.431868  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:38.431880  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:38.431895  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:38.481994  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:38.482028  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:38.495885  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:38.495911  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:38.563384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:38.563411  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:38.563437  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:38.646806  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:38.646848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
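	Before each collection pass the tooling first checks whether a control plane is present at all: pgrep for a kube-apiserver process, then crictl queried once per expected component; an empty ID list produces the "No container was found matching ..." warnings above. A rough equivalent, run inside the guest with the component names taken from the log and assuming crictl is on the PATH:

	    # detect whether any control-plane process or container exists
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-<none>}"     # empty output means no container in any state
	    done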
	I0731 21:01:36.089465  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.590301  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:37.611057  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.110731  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.015769  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.513690  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:41.187323  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:41.200995  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:41.201063  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:41.241620  188656 cri.go:89] found id: ""
	I0731 21:01:41.241651  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.241663  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:41.241671  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:41.241745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:41.279565  188656 cri.go:89] found id: ""
	I0731 21:01:41.279595  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.279604  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:41.279609  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:41.279666  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:41.320710  188656 cri.go:89] found id: ""
	I0731 21:01:41.320744  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.320755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:41.320763  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:41.320834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:41.356428  188656 cri.go:89] found id: ""
	I0731 21:01:41.356460  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.356472  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:41.356480  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:41.356544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:41.390493  188656 cri.go:89] found id: ""
	I0731 21:01:41.390525  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.390536  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:41.390544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:41.390612  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:41.424244  188656 cri.go:89] found id: ""
	I0731 21:01:41.424271  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.424282  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:41.424290  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:41.424350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:41.459916  188656 cri.go:89] found id: ""
	I0731 21:01:41.459946  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.459955  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:41.459961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:41.460012  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:41.493891  188656 cri.go:89] found id: ""
	I0731 21:01:41.493917  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.493926  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:41.493936  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:41.493950  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:41.544066  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:41.544106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:41.558504  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:41.558534  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:41.632996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:41.633021  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:41.633039  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:41.712637  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:41.712677  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:41.087979  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:43.088834  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.610136  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:45.109986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.514059  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.514535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.014970  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.255947  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:44.268961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:44.269050  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:44.304621  188656 cri.go:89] found id: ""
	I0731 21:01:44.304656  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.304668  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:44.304676  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:44.304732  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:44.339389  188656 cri.go:89] found id: ""
	I0731 21:01:44.339429  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.339441  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:44.339448  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:44.339510  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:44.373069  188656 cri.go:89] found id: ""
	I0731 21:01:44.373095  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.373103  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:44.373110  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:44.373179  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:44.408784  188656 cri.go:89] found id: ""
	I0731 21:01:44.408812  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.408821  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:44.408829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:44.408896  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:44.445636  188656 cri.go:89] found id: ""
	I0731 21:01:44.445671  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.445682  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:44.445690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:44.445759  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:44.483529  188656 cri.go:89] found id: ""
	I0731 21:01:44.483565  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.483577  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:44.483585  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:44.483643  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:44.517959  188656 cri.go:89] found id: ""
	I0731 21:01:44.517980  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.517987  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:44.517993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:44.518042  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:44.552322  188656 cri.go:89] found id: ""
	I0731 21:01:44.552367  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.552392  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:44.552405  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:44.552421  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:44.625005  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:44.625030  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:44.625043  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:44.702547  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:44.702585  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:44.741754  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:44.741792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:44.795179  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:44.795216  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.309995  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:47.323993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:47.324076  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:47.365546  188656 cri.go:89] found id: ""
	I0731 21:01:47.365576  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.365587  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:47.365595  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:47.365682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:47.402774  188656 cri.go:89] found id: ""
	I0731 21:01:47.402810  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.402822  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:47.402831  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:47.402899  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:47.440716  188656 cri.go:89] found id: ""
	I0731 21:01:47.440746  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.440755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:47.440761  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:47.440811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:47.479418  188656 cri.go:89] found id: ""
	I0731 21:01:47.479450  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.479461  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:47.479469  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:47.479535  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:47.514027  188656 cri.go:89] found id: ""
	I0731 21:01:47.514065  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.514074  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:47.514081  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:47.514149  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:47.550178  188656 cri.go:89] found id: ""
	I0731 21:01:47.550203  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.550212  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:47.550218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:47.550271  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:47.587844  188656 cri.go:89] found id: ""
	I0731 21:01:47.587873  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.587883  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:47.587891  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:47.587945  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:47.627581  188656 cri.go:89] found id: ""
	I0731 21:01:47.627608  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.627620  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:47.627633  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:47.627647  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:47.683364  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:47.683408  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.697882  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:47.697917  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:47.773804  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:47.773834  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:47.773848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:47.859356  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:47.859404  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:45.090199  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.091328  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.610631  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.109476  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:49.514186  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.013486  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.402403  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:50.417269  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:50.417332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:50.452762  188656 cri.go:89] found id: ""
	I0731 21:01:50.452786  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.452793  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:50.452799  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:50.452852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:50.486741  188656 cri.go:89] found id: ""
	I0731 21:01:50.486771  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.486782  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:50.486789  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:50.486855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:50.526144  188656 cri.go:89] found id: ""
	I0731 21:01:50.526174  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.526185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:50.526193  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:50.526246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:50.560957  188656 cri.go:89] found id: ""
	I0731 21:01:50.560985  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.560995  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:50.561003  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:50.561065  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:50.597228  188656 cri.go:89] found id: ""
	I0731 21:01:50.597258  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.597269  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:50.597275  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:50.597357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:50.638153  188656 cri.go:89] found id: ""
	I0731 21:01:50.638183  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.638199  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:50.638208  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:50.638270  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:50.672236  188656 cri.go:89] found id: ""
	I0731 21:01:50.672266  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.672274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:50.672280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:50.672340  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:50.704069  188656 cri.go:89] found id: ""
	I0731 21:01:50.704093  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.704102  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:50.704112  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:50.704125  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:50.757973  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:50.758010  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:50.771203  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:50.771229  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:50.842937  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:50.842956  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:50.842969  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:50.925819  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:50.925857  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.470691  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:53.485260  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:53.485332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:53.524110  188656 cri.go:89] found id: ""
	I0731 21:01:53.524139  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.524148  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:53.524154  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:53.524215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:53.557642  188656 cri.go:89] found id: ""
	I0731 21:01:53.557668  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.557676  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:53.557682  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:53.557737  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:53.595594  188656 cri.go:89] found id: ""
	I0731 21:01:53.595622  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.595641  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:53.595647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:53.595712  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:53.634458  188656 cri.go:89] found id: ""
	I0731 21:01:53.634487  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.634499  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:53.634507  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:53.634567  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:53.674124  188656 cri.go:89] found id: ""
	I0731 21:01:53.674149  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.674157  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:53.674164  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:53.674234  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:53.706861  188656 cri.go:89] found id: ""
	I0731 21:01:53.706888  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.706897  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:53.706903  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:53.706957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:53.745476  188656 cri.go:89] found id: ""
	I0731 21:01:53.745504  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.745511  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:53.745522  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:53.745575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:53.780847  188656 cri.go:89] found id: ""
	I0731 21:01:53.780878  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.780889  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:53.780902  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:53.780922  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:49.589017  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.088587  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.088885  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.109889  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.110634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.014383  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.512884  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:53.853469  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:53.853497  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:53.853517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:53.930506  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:53.930544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.975439  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:53.975475  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:54.027903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:54.027937  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.542860  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:56.557744  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:56.557813  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:56.596034  188656 cri.go:89] found id: ""
	I0731 21:01:56.596065  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.596075  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:56.596082  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:56.596146  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:56.631531  188656 cri.go:89] found id: ""
	I0731 21:01:56.631561  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.631572  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:56.631579  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:56.631653  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:56.665824  188656 cri.go:89] found id: ""
	I0731 21:01:56.665853  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.665865  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:56.665872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:56.665940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:56.698965  188656 cri.go:89] found id: ""
	I0731 21:01:56.698993  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.699002  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:56.699008  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:56.699074  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:56.735314  188656 cri.go:89] found id: ""
	I0731 21:01:56.735347  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.735359  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:56.735367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:56.735443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:56.770350  188656 cri.go:89] found id: ""
	I0731 21:01:56.770383  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.770393  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:56.770402  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:56.770485  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:56.808934  188656 cri.go:89] found id: ""
	I0731 21:01:56.808962  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.808970  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:56.808976  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:56.809027  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:56.845305  188656 cri.go:89] found id: ""
	I0731 21:01:56.845331  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.845354  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:56.845366  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:56.845383  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:56.922810  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:56.922832  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:56.922846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:56.998009  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:56.998046  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:57.037905  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:57.037934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:57.092438  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:57.092469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.591334  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.089696  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.110825  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.111013  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.111696  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.513270  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.514474  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
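	The interleaved pod_ready lines come from the other clusters in this run polling their metrics-server pod for the Ready condition, which stays False throughout. A rough manual equivalent, assuming kubectl access to one of those clusters (the pod name is copied from the log; the pod's events usually show why it never becomes ready):

	    # check the Ready condition and the events for the stuck metrics-server pod
	    kubectl -n kube-system get pod metrics-server-569cc877fc-jf52w \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo
	    kubectl -n kube-system describe pod metrics-server-569cc877fc-jf52w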
	I0731 21:01:59.608087  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:59.622465  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:59.622537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:59.660221  188656 cri.go:89] found id: ""
	I0731 21:01:59.660254  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.660265  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:59.660274  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:59.660338  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:59.696158  188656 cri.go:89] found id: ""
	I0731 21:01:59.696193  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.696205  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:59.696213  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:59.696272  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:59.733607  188656 cri.go:89] found id: ""
	I0731 21:01:59.733635  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.733646  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:59.733656  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:59.733727  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:59.770298  188656 cri.go:89] found id: ""
	I0731 21:01:59.770327  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.770336  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:59.770342  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:59.770396  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:59.805630  188656 cri.go:89] found id: ""
	I0731 21:01:59.805659  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.805670  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:59.805682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:59.805749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:59.841064  188656 cri.go:89] found id: ""
	I0731 21:01:59.841089  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.841098  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:59.841106  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:59.841166  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:59.877237  188656 cri.go:89] found id: ""
	I0731 21:01:59.877265  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.877274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:59.877284  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:59.877364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:59.917102  188656 cri.go:89] found id: ""
	I0731 21:01:59.917138  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.917166  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:59.917179  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:59.917196  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:59.971806  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:59.971846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:59.986267  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:59.986304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:00.063185  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:00.063227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:00.063244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:00.148498  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:00.148541  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:02.690235  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:02.704623  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:02.704703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:02.740557  188656 cri.go:89] found id: ""
	I0731 21:02:02.740588  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.740599  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:02.740606  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:02.740667  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:02.776340  188656 cri.go:89] found id: ""
	I0731 21:02:02.776382  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.776391  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:02.776396  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:02.776449  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:02.811645  188656 cri.go:89] found id: ""
	I0731 21:02:02.811673  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.811683  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:02.811691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:02.811754  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:02.847226  188656 cri.go:89] found id: ""
	I0731 21:02:02.847259  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.847267  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:02.847273  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:02.847326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:02.885591  188656 cri.go:89] found id: ""
	I0731 21:02:02.885617  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.885626  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:02.885631  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:02.885694  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:02.924250  188656 cri.go:89] found id: ""
	I0731 21:02:02.924281  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.924289  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:02.924296  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:02.924358  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:02.959608  188656 cri.go:89] found id: ""
	I0731 21:02:02.959638  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.959649  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:02.959657  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:02.959731  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:02.998175  188656 cri.go:89] found id: ""
	I0731 21:02:02.998205  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.998215  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:02.998228  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:02.998248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:03.053320  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:03.053382  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:03.067681  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:03.067711  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:03.145222  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:03.145251  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:03.145270  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:03.228413  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:03.228456  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:01.590197  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:04.087692  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:02.610477  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.110544  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:03.016030  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.513082  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.780407  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:05.793872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:05.793952  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:05.828940  188656 cri.go:89] found id: ""
	I0731 21:02:05.828971  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.828980  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:05.828987  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:05.829051  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:05.866470  188656 cri.go:89] found id: ""
	I0731 21:02:05.866503  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.866515  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:05.866522  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:05.866594  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:05.904756  188656 cri.go:89] found id: ""
	I0731 21:02:05.904792  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.904807  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:05.904814  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:05.904868  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:05.941534  188656 cri.go:89] found id: ""
	I0731 21:02:05.941564  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.941574  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:05.941581  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:05.941649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:05.980413  188656 cri.go:89] found id: ""
	I0731 21:02:05.980453  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.980465  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:05.980472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:05.980563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:06.023226  188656 cri.go:89] found id: ""
	I0731 21:02:06.023258  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.023269  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:06.023277  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:06.023345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:06.061098  188656 cri.go:89] found id: ""
	I0731 21:02:06.061130  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.061138  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:06.061145  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:06.061195  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:06.097825  188656 cri.go:89] found id: ""
	I0731 21:02:06.097852  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.097860  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:06.097870  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:06.097883  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:06.149181  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:06.149223  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:06.164610  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:06.164651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:06.248639  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:06.248666  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:06.248684  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:06.332445  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:06.332486  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:06.089967  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.588610  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.610691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.611166  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.513999  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.514554  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:11.516493  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.873697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:08.887632  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:08.887745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:08.926002  188656 cri.go:89] found id: ""
	I0731 21:02:08.926032  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.926042  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:08.926051  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:08.926117  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:08.962999  188656 cri.go:89] found id: ""
	I0731 21:02:08.963028  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.963039  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:08.963047  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:08.963103  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:09.023016  188656 cri.go:89] found id: ""
	I0731 21:02:09.023043  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.023051  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:09.023057  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:09.023109  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:09.059672  188656 cri.go:89] found id: ""
	I0731 21:02:09.059699  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.059708  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:09.059714  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:09.059774  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:09.097603  188656 cri.go:89] found id: ""
	I0731 21:02:09.097635  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.097645  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:09.097653  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:09.097720  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:09.136210  188656 cri.go:89] found id: ""
	I0731 21:02:09.136240  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.136251  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:09.136259  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:09.136326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:09.176167  188656 cri.go:89] found id: ""
	I0731 21:02:09.176204  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.176211  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:09.176218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:09.176277  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:09.214151  188656 cri.go:89] found id: ""
	I0731 21:02:09.214180  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.214189  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:09.214199  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:09.214212  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:09.267579  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:09.267618  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:09.282420  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:09.282445  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:09.354067  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:09.354092  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:09.354111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:09.433454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:09.433500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.979715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:11.993050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:11.993123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:12.027731  188656 cri.go:89] found id: ""
	I0731 21:02:12.027759  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.027767  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:12.027773  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:12.027834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:12.064410  188656 cri.go:89] found id: ""
	I0731 21:02:12.064442  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.064452  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:12.064459  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:12.064525  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:12.101061  188656 cri.go:89] found id: ""
	I0731 21:02:12.101096  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.101107  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:12.101115  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:12.101176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:12.142240  188656 cri.go:89] found id: ""
	I0731 21:02:12.142271  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.142284  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:12.142292  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:12.142357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:12.184949  188656 cri.go:89] found id: ""
	I0731 21:02:12.184980  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.184988  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:12.184994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:12.185064  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:12.226031  188656 cri.go:89] found id: ""
	I0731 21:02:12.226068  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.226080  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:12.226089  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:12.226155  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:12.272880  188656 cri.go:89] found id: ""
	I0731 21:02:12.272913  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.272923  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:12.272931  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:12.272989  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:12.306968  188656 cri.go:89] found id: ""
	I0731 21:02:12.307011  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.307033  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:12.307068  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:12.307090  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:12.359357  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:12.359402  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:12.374817  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:12.374848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:12.445107  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:12.445128  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:12.445141  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:12.530017  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:12.530058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.088281  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:13.090442  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:12.110720  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.611142  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.013967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:16.014021  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:15.070277  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:15.084326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:15.084411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:15.123513  188656 cri.go:89] found id: ""
	I0731 21:02:15.123549  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.123562  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:15.123569  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:15.123624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:15.159855  188656 cri.go:89] found id: ""
	I0731 21:02:15.159888  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.159899  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:15.159908  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:15.159973  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:15.195879  188656 cri.go:89] found id: ""
	I0731 21:02:15.195911  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.195919  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:15.195926  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:15.195986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:15.231216  188656 cri.go:89] found id: ""
	I0731 21:02:15.231249  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.231258  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:15.231265  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:15.231331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:15.265711  188656 cri.go:89] found id: ""
	I0731 21:02:15.265740  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.265748  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:15.265754  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:15.265803  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:15.300991  188656 cri.go:89] found id: ""
	I0731 21:02:15.301020  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.301027  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:15.301033  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:15.301083  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:15.338507  188656 cri.go:89] found id: ""
	I0731 21:02:15.338533  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.338542  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:15.338550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:15.338614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:15.375540  188656 cri.go:89] found id: ""
	I0731 21:02:15.375583  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.375595  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:15.375606  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:15.375631  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:15.428903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:15.428946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:15.444018  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:15.444052  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:15.518807  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.518842  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:15.518859  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:15.602655  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:15.602693  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.158731  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:18.172861  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:18.172940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:18.207451  188656 cri.go:89] found id: ""
	I0731 21:02:18.207480  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.207489  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:18.207495  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:18.207555  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:18.244974  188656 cri.go:89] found id: ""
	I0731 21:02:18.245004  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.245013  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:18.245019  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:18.245079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:18.281589  188656 cri.go:89] found id: ""
	I0731 21:02:18.281622  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.281630  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:18.281637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:18.281698  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:18.321413  188656 cri.go:89] found id: ""
	I0731 21:02:18.321445  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.321455  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:18.321461  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:18.321526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:18.360600  188656 cri.go:89] found id: ""
	I0731 21:02:18.360627  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.360639  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:18.360647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:18.360707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:18.396312  188656 cri.go:89] found id: ""
	I0731 21:02:18.396344  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.396356  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:18.396364  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:18.396451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:18.431586  188656 cri.go:89] found id: ""
	I0731 21:02:18.431618  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.431630  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:18.431637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:18.431711  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:18.472995  188656 cri.go:89] found id: ""
	I0731 21:02:18.473025  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.473035  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:18.473047  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:18.473063  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:18.558826  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:18.558865  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.600083  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:18.600110  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:18.657944  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:18.657988  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:18.672860  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:18.672888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:18.748806  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.589795  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.088699  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:17.112784  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:19.609312  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.513798  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.014437  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.249418  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:21.263304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:21.263385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:21.298591  188656 cri.go:89] found id: ""
	I0731 21:02:21.298624  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.298635  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:21.298643  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:21.298707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:21.335913  188656 cri.go:89] found id: ""
	I0731 21:02:21.335939  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.335947  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:21.335954  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:21.336011  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:21.378314  188656 cri.go:89] found id: ""
	I0731 21:02:21.378347  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.378359  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:21.378368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:21.378436  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:21.422707  188656 cri.go:89] found id: ""
	I0731 21:02:21.422738  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.422748  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:21.422757  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:21.422826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:21.487851  188656 cri.go:89] found id: ""
	I0731 21:02:21.487878  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.487887  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:21.487893  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:21.487946  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:21.528944  188656 cri.go:89] found id: ""
	I0731 21:02:21.528970  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.528981  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:21.528990  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:21.529054  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:21.565091  188656 cri.go:89] found id: ""
	I0731 21:02:21.565118  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.565126  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:21.565132  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:21.565182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:21.599985  188656 cri.go:89] found id: ""
	I0731 21:02:21.600015  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.600027  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:21.600041  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:21.600057  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:21.652065  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:21.652106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:21.666497  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:21.666528  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:21.741853  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:21.741893  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:21.741919  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:21.822478  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:21.822517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:20.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:22.589558  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.610996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.111590  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:23.513209  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:25.514400  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.363018  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:24.375640  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:24.375704  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:24.411383  188656 cri.go:89] found id: ""
	I0731 21:02:24.411416  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.411427  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:24.411436  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:24.411513  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:24.447536  188656 cri.go:89] found id: ""
	I0731 21:02:24.447565  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.447573  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:24.447578  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:24.447651  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:24.489270  188656 cri.go:89] found id: ""
	I0731 21:02:24.489301  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.489311  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:24.489320  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:24.489398  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:24.527891  188656 cri.go:89] found id: ""
	I0731 21:02:24.527922  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.527932  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:24.527938  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:24.527998  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:24.566854  188656 cri.go:89] found id: ""
	I0731 21:02:24.566886  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.566897  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:24.566904  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:24.566974  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:24.606234  188656 cri.go:89] found id: ""
	I0731 21:02:24.606267  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.606278  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:24.606285  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:24.606357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:24.642880  188656 cri.go:89] found id: ""
	I0731 21:02:24.642909  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.642921  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:24.642929  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:24.642982  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:24.680069  188656 cri.go:89] found id: ""
	I0731 21:02:24.680101  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.680112  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:24.680124  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:24.680142  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:24.735337  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:24.735378  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:24.749010  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:24.749040  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:24.826406  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:24.826441  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:24.826458  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.906995  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:24.907049  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.451405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:27.474178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:27.474251  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:27.514912  188656 cri.go:89] found id: ""
	I0731 21:02:27.514938  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.514945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:27.514951  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:27.515007  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:27.552850  188656 cri.go:89] found id: ""
	I0731 21:02:27.552880  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.552890  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:27.552896  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:27.552953  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:27.590468  188656 cri.go:89] found id: ""
	I0731 21:02:27.590496  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.590503  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:27.590509  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:27.590572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:27.626295  188656 cri.go:89] found id: ""
	I0731 21:02:27.626322  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.626330  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:27.626339  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:27.626391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:27.662654  188656 cri.go:89] found id: ""
	I0731 21:02:27.662690  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.662701  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:27.662708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:27.662770  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:27.699528  188656 cri.go:89] found id: ""
	I0731 21:02:27.699558  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.699566  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:27.699572  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:27.699639  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:27.740501  188656 cri.go:89] found id: ""
	I0731 21:02:27.740528  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.740539  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:27.740547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:27.740613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:27.778919  188656 cri.go:89] found id: ""
	I0731 21:02:27.778954  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.778966  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:27.778980  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:27.778999  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.815475  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:27.815500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:27.866578  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:27.866615  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:27.880799  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:27.880830  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:27.948987  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:27.949014  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:27.949032  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.596180  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:27.088624  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:26.610897  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:29.110263  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:28.014828  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.514006  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.532314  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:30.546245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:30.546317  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:30.581736  188656 cri.go:89] found id: ""
	I0731 21:02:30.581763  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.581772  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:30.581778  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:30.581837  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:30.618790  188656 cri.go:89] found id: ""
	I0731 21:02:30.618816  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.618824  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:30.618830  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:30.618886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:30.654504  188656 cri.go:89] found id: ""
	I0731 21:02:30.654530  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.654538  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:30.654544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:30.654603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:30.690570  188656 cri.go:89] found id: ""
	I0731 21:02:30.690598  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.690609  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:30.690617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:30.690683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:30.739676  188656 cri.go:89] found id: ""
	I0731 21:02:30.739705  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.739715  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:30.739723  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:30.739789  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:30.777860  188656 cri.go:89] found id: ""
	I0731 21:02:30.777891  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.777902  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:30.777911  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:30.777995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:30.814036  188656 cri.go:89] found id: ""
	I0731 21:02:30.814073  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.814088  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:30.814096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:30.814168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:30.847262  188656 cri.go:89] found id: ""
	I0731 21:02:30.847292  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.847304  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:30.847316  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:30.847338  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:30.898556  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:30.898596  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:30.912940  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:30.912974  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:30.987384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:30.987405  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:30.987419  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:31.071376  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:31.071416  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:33.613677  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:33.628304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:33.628380  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:33.662932  188656 cri.go:89] found id: ""
	I0731 21:02:33.662965  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.662977  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:33.662985  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:33.663055  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:33.697445  188656 cri.go:89] found id: ""
	I0731 21:02:33.697477  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.697487  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:33.697493  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:33.697553  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:33.734480  188656 cri.go:89] found id: ""
	I0731 21:02:33.734516  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.734527  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:33.734536  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:33.734614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:33.770069  188656 cri.go:89] found id: ""
	I0731 21:02:33.770095  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.770104  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:33.770111  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:33.770194  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:33.806315  188656 cri.go:89] found id: ""
	I0731 21:02:33.806341  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.806350  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:33.806356  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:33.806408  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:29.592432  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:32.088842  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:34.089378  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:31.112420  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.611815  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.014022  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:35.014517  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:37.018514  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.842747  188656 cri.go:89] found id: ""
	I0731 21:02:33.842775  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.842782  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:33.842789  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:33.842856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:33.877581  188656 cri.go:89] found id: ""
	I0731 21:02:33.877607  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.877616  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:33.877622  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:33.877682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:33.913238  188656 cri.go:89] found id: ""
	I0731 21:02:33.913263  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.913271  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:33.913282  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:33.913298  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:33.967112  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:33.967148  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:33.980961  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:33.980994  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:34.054886  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:34.054917  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:34.054939  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:34.143088  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:34.143127  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:36.687110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:36.700649  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:36.700725  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:36.737796  188656 cri.go:89] found id: ""
	I0731 21:02:36.737829  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.737841  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:36.737849  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:36.737916  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:36.773010  188656 cri.go:89] found id: ""
	I0731 21:02:36.773048  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.773059  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:36.773067  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:36.773136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:36.813945  188656 cri.go:89] found id: ""
	I0731 21:02:36.813978  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.813988  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:36.813994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:36.814047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:36.849826  188656 cri.go:89] found id: ""
	I0731 21:02:36.849860  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.849872  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:36.849880  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:36.849943  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:36.887200  188656 cri.go:89] found id: ""
	I0731 21:02:36.887233  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.887244  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:36.887253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:36.887391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:36.922529  188656 cri.go:89] found id: ""
	I0731 21:02:36.922562  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.922573  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:36.922582  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:36.922644  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:36.958119  188656 cri.go:89] found id: ""
	I0731 21:02:36.958154  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.958166  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:36.958174  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:36.958240  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:37.001071  188656 cri.go:89] found id: ""
	I0731 21:02:37.001104  188656 logs.go:276] 0 containers: []
	W0731 21:02:37.001113  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:37.001123  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:37.001136  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:37.041248  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:37.041288  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:37.100519  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:37.100558  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:37.115157  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:37.115188  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:37.191232  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:37.191259  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:37.191277  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:36.588213  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.589224  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:36.109307  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.110675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:40.111284  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.514052  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.013265  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.772834  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:39.788137  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:39.788203  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:39.827329  188656 cri.go:89] found id: ""
	I0731 21:02:39.827361  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.827371  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:39.827378  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:39.827458  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:39.864855  188656 cri.go:89] found id: ""
	I0731 21:02:39.864882  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.864889  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:39.864897  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:39.864958  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:39.901955  188656 cri.go:89] found id: ""
	I0731 21:02:39.901981  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.901990  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:39.901996  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:39.902059  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:39.941376  188656 cri.go:89] found id: ""
	I0731 21:02:39.941402  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.941412  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:39.941418  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:39.941473  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:39.975321  188656 cri.go:89] found id: ""
	I0731 21:02:39.975352  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.975364  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:39.975394  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:39.975465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:40.010106  188656 cri.go:89] found id: ""
	I0731 21:02:40.010136  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.010148  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:40.010157  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:40.010220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:40.043963  188656 cri.go:89] found id: ""
	I0731 21:02:40.043997  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.044009  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:40.044017  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:40.044089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:40.079178  188656 cri.go:89] found id: ""
	I0731 21:02:40.079216  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.079224  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:40.079234  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:40.079248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:40.141115  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:40.141158  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:40.156722  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:40.156758  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:40.233758  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:40.233782  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:40.233797  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:40.317316  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:40.317375  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:42.858649  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:42.872135  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:42.872221  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:42.911966  188656 cri.go:89] found id: ""
	I0731 21:02:42.911998  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.912007  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:42.912014  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:42.912081  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:42.950036  188656 cri.go:89] found id: ""
	I0731 21:02:42.950070  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.950079  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:42.950085  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:42.950138  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:42.987201  188656 cri.go:89] found id: ""
	I0731 21:02:42.987233  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.987245  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:42.987253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:42.987326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:43.027250  188656 cri.go:89] found id: ""
	I0731 21:02:43.027285  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.027297  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:43.027306  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:43.027374  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:43.063419  188656 cri.go:89] found id: ""
	I0731 21:02:43.063448  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.063456  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:43.063463  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:43.063527  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:43.101155  188656 cri.go:89] found id: ""
	I0731 21:02:43.101184  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.101193  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:43.101199  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:43.101249  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:43.142633  188656 cri.go:89] found id: ""
	I0731 21:02:43.142658  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.142667  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:43.142675  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:43.142741  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:43.177747  188656 cri.go:89] found id: ""
	I0731 21:02:43.177780  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.177789  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:43.177799  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:43.177813  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:43.228074  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:43.228114  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:43.242132  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:43.242165  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:43.313026  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:43.313054  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:43.313072  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:43.394620  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:43.394663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:40.589306  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.589428  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.612236  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.110401  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:44.513370  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:46.514350  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.937932  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:45.951871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:45.951964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:45.987615  188656 cri.go:89] found id: ""
	I0731 21:02:45.987642  188656 logs.go:276] 0 containers: []
	W0731 21:02:45.987650  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:45.987656  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:45.987715  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:46.022632  188656 cri.go:89] found id: ""
	I0731 21:02:46.022659  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.022667  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:46.022674  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:46.022746  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:46.061153  188656 cri.go:89] found id: ""
	I0731 21:02:46.061182  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.061191  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:46.061196  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:46.061246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:46.099168  188656 cri.go:89] found id: ""
	I0731 21:02:46.099197  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.099206  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:46.099212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:46.099266  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:46.137269  188656 cri.go:89] found id: ""
	I0731 21:02:46.137300  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.137312  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:46.137321  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:46.137403  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:46.172330  188656 cri.go:89] found id: ""
	I0731 21:02:46.172391  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.172404  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:46.172417  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:46.172489  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:46.213314  188656 cri.go:89] found id: ""
	I0731 21:02:46.213358  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.213370  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:46.213378  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:46.213451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:46.248663  188656 cri.go:89] found id: ""
	I0731 21:02:46.248697  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.248707  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:46.248719  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:46.248735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:46.305433  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:46.305472  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:46.319065  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:46.319098  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:46.387025  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:46.387046  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:46.387058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:46.476721  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:46.476769  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:44.589757  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.089954  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.112823  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.114163  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.014193  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.014760  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.020882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:49.036502  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:49.036573  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:49.076478  188656 cri.go:89] found id: ""
	I0731 21:02:49.076509  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.076518  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:49.076525  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:49.076578  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:49.116065  188656 cri.go:89] found id: ""
	I0731 21:02:49.116098  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.116106  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:49.116112  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:49.116168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:49.153237  188656 cri.go:89] found id: ""
	I0731 21:02:49.153274  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.153287  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:49.153295  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:49.153385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:49.192821  188656 cri.go:89] found id: ""
	I0731 21:02:49.192849  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.192858  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:49.192864  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:49.192918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:49.230627  188656 cri.go:89] found id: ""
	I0731 21:02:49.230660  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.230671  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:49.230679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:49.230749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:49.266575  188656 cri.go:89] found id: ""
	I0731 21:02:49.266603  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.266611  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:49.266617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:49.266688  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:49.312489  188656 cri.go:89] found id: ""
	I0731 21:02:49.312522  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.312533  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:49.312541  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:49.312613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:49.348907  188656 cri.go:89] found id: ""
	I0731 21:02:49.348932  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.348941  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:49.348950  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:49.348965  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:49.363229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:49.363267  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:49.435708  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:49.435732  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:49.435745  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.522002  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:49.522047  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:49.566823  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:49.566868  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.122660  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:52.136559  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:52.136629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:52.173198  188656 cri.go:89] found id: ""
	I0731 21:02:52.173227  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.173236  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:52.173242  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:52.173310  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:52.208464  188656 cri.go:89] found id: ""
	I0731 21:02:52.208503  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.208514  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:52.208521  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:52.208590  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:52.246052  188656 cri.go:89] found id: ""
	I0731 21:02:52.246084  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.246091  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:52.246098  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:52.246160  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:52.281798  188656 cri.go:89] found id: ""
	I0731 21:02:52.281831  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.281843  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:52.281852  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:52.281918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:52.318924  188656 cri.go:89] found id: ""
	I0731 21:02:52.318954  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.318975  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:52.318983  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:52.319052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:52.356752  188656 cri.go:89] found id: ""
	I0731 21:02:52.356788  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.356800  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:52.356809  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:52.356874  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:52.391507  188656 cri.go:89] found id: ""
	I0731 21:02:52.391537  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.391545  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:52.391551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:52.391602  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:52.430714  188656 cri.go:89] found id: ""
	I0731 21:02:52.430749  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.430761  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:52.430774  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:52.430792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:52.482600  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:52.482629  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.535317  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:52.535361  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:52.549835  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:52.549874  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:52.628319  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:52.628347  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:52.628365  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.590499  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:52.089170  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.089832  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.610237  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.112782  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:53.513932  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.516784  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.216678  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:55.231142  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:55.231225  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:55.266283  188656 cri.go:89] found id: ""
	I0731 21:02:55.266321  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.266334  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:55.266341  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:55.266399  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:55.301457  188656 cri.go:89] found id: ""
	I0731 21:02:55.301493  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.301506  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:55.301514  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:55.301574  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:55.338427  188656 cri.go:89] found id: ""
	I0731 21:02:55.338453  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.338461  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:55.338467  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:55.338521  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:55.373718  188656 cri.go:89] found id: ""
	I0731 21:02:55.373748  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.373757  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:55.373764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:55.373846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:55.410989  188656 cri.go:89] found id: ""
	I0731 21:02:55.411022  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.411034  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:55.411042  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:55.411100  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:55.452867  188656 cri.go:89] found id: ""
	I0731 21:02:55.452904  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.452915  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:55.452924  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:55.452995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:55.512781  188656 cri.go:89] found id: ""
	I0731 21:02:55.512809  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.512821  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:55.512829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:55.512894  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:55.550460  188656 cri.go:89] found id: ""
	I0731 21:02:55.550487  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.550495  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:55.550505  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:55.550521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:55.625776  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:55.625804  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:55.625821  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:55.711276  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:55.711322  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:55.765078  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:55.765111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:55.818131  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:55.818176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:58.332914  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:58.346908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:58.346992  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:58.383641  188656 cri.go:89] found id: ""
	I0731 21:02:58.383686  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.383695  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:58.383700  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:58.383753  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:58.419538  188656 cri.go:89] found id: ""
	I0731 21:02:58.419566  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.419576  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:58.419584  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:58.419649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:58.457036  188656 cri.go:89] found id: ""
	I0731 21:02:58.457069  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.457080  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:58.457088  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:58.457162  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:58.497596  188656 cri.go:89] found id: ""
	I0731 21:02:58.497621  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.497629  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:58.497635  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:58.497706  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:58.538184  188656 cri.go:89] found id: ""
	I0731 21:02:58.538211  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.538220  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:58.538226  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:58.538291  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:58.584428  188656 cri.go:89] found id: ""
	I0731 21:02:58.584457  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.584468  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:58.584476  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:58.584537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:58.625052  188656 cri.go:89] found id: ""
	I0731 21:02:58.625084  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.625096  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:58.625103  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:58.625171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:58.662222  188656 cri.go:89] found id: ""
	I0731 21:02:58.662248  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.662256  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:58.662266  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:58.662278  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:58.740491  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:58.740530  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:58.782685  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:58.782714  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:58.833620  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:58.833668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:56.091277  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.589516  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:56.609399  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.610957  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.013927  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:00.015179  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.848679  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:58.848713  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:58.925496  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.426171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:01.440261  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:01.440341  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:01.477362  188656 cri.go:89] found id: ""
	I0731 21:03:01.477393  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.477405  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:01.477414  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:01.477483  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:01.516640  188656 cri.go:89] found id: ""
	I0731 21:03:01.516675  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.516692  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:01.516701  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:01.516764  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:01.560713  188656 cri.go:89] found id: ""
	I0731 21:03:01.560744  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.560756  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:01.560762  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:01.560844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:01.604050  188656 cri.go:89] found id: ""
	I0731 21:03:01.604086  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.604097  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:01.604105  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:01.604170  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:01.641358  188656 cri.go:89] found id: ""
	I0731 21:03:01.641391  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.641401  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:01.641406  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:01.641471  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:01.677332  188656 cri.go:89] found id: ""
	I0731 21:03:01.677380  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.677390  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:01.677397  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:01.677459  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:01.713781  188656 cri.go:89] found id: ""
	I0731 21:03:01.713815  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.713826  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:01.713833  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:01.713914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:01.757499  188656 cri.go:89] found id: ""
	I0731 21:03:01.757543  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.757552  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:01.757563  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:01.757575  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:01.832330  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.832370  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:01.832384  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:01.918996  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:01.919050  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:01.979268  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:01.979307  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:02.037528  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:02.037564  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:00.591373  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.089405  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:01.110471  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.611348  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:02.513998  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:05.015060  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:04.552758  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:04.566881  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:04.566960  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:04.604631  188656 cri.go:89] found id: ""
	I0731 21:03:04.604669  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.604680  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:04.604688  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:04.604791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:04.644027  188656 cri.go:89] found id: ""
	I0731 21:03:04.644052  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.644061  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:04.644068  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:04.644134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:04.680010  188656 cri.go:89] found id: ""
	I0731 21:03:04.680037  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.680045  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:04.680050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:04.680102  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:04.717095  188656 cri.go:89] found id: ""
	I0731 21:03:04.717123  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.717133  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:04.717140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:04.717212  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:04.755297  188656 cri.go:89] found id: ""
	I0731 21:03:04.755324  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.755331  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:04.755337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:04.755387  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:04.792073  188656 cri.go:89] found id: ""
	I0731 21:03:04.792104  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.792113  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:04.792119  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:04.792168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:04.828428  188656 cri.go:89] found id: ""
	I0731 21:03:04.828460  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.828468  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:04.828475  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:04.828541  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:04.863871  188656 cri.go:89] found id: ""
	I0731 21:03:04.863905  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.863916  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:04.863929  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:04.863946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:04.879591  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:04.879626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:04.962199  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:04.962227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:04.962245  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.048502  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:05.048547  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:05.090812  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:05.090838  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:07.647307  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:07.664586  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:07.664656  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:07.719851  188656 cri.go:89] found id: ""
	I0731 21:03:07.719887  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.719899  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:07.719908  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:07.719978  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:07.778295  188656 cri.go:89] found id: ""
	I0731 21:03:07.778330  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.778343  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:07.778350  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:07.778417  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:07.817911  188656 cri.go:89] found id: ""
	I0731 21:03:07.817937  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.817947  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:07.817954  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:07.818004  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:07.853177  188656 cri.go:89] found id: ""
	I0731 21:03:07.853211  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.853222  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:07.853229  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:07.853308  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:07.888992  188656 cri.go:89] found id: ""
	I0731 21:03:07.889020  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.889046  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:07.889055  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:07.889133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:07.924327  188656 cri.go:89] found id: ""
	I0731 21:03:07.924358  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.924369  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:07.924377  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:07.924461  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:07.964438  188656 cri.go:89] found id: ""
	I0731 21:03:07.964470  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.964480  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:07.964489  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:07.964572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:08.003566  188656 cri.go:89] found id: ""
	I0731 21:03:08.003610  188656 logs.go:276] 0 containers: []
	W0731 21:03:08.003621  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:08.003634  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:08.003651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:08.044246  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:08.044286  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:08.097479  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:08.097517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:08.113636  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:08.113663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:08.187217  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:08.187244  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:08.187261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.090205  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.589488  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:06.110184  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:08.111598  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.611986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.513036  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:09.513637  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.514176  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.771248  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:10.786159  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:10.786232  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:10.823724  188656 cri.go:89] found id: ""
	I0731 21:03:10.823756  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.823769  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:10.823777  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:10.823846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:10.862440  188656 cri.go:89] found id: ""
	I0731 21:03:10.862468  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.862480  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:10.862488  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:10.862544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:10.901499  188656 cri.go:89] found id: ""
	I0731 21:03:10.901527  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.901539  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:10.901547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:10.901611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:10.940255  188656 cri.go:89] found id: ""
	I0731 21:03:10.940279  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.940287  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:10.940293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:10.940356  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:10.975315  188656 cri.go:89] found id: ""
	I0731 21:03:10.975344  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.975353  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:10.975360  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:10.975420  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:11.011453  188656 cri.go:89] found id: ""
	I0731 21:03:11.011482  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.011538  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:11.011549  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:11.011611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:11.047846  188656 cri.go:89] found id: ""
	I0731 21:03:11.047887  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.047899  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:11.047907  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:11.047972  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:11.086243  188656 cri.go:89] found id: ""
	I0731 21:03:11.086271  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.086282  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:11.086293  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:11.086309  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:11.139390  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:11.139430  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:11.154637  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:11.154669  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:11.225996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:11.226019  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:11.226035  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:11.305235  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:11.305280  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:09.589831  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.590312  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.089750  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.110191  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:15.112258  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.013609  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:16.014143  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.845792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:13.859185  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:13.859261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:13.896017  188656 cri.go:89] found id: ""
	I0731 21:03:13.896047  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.896055  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:13.896061  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:13.896123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:13.932442  188656 cri.go:89] found id: ""
	I0731 21:03:13.932475  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.932486  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:13.932494  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:13.932564  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:13.971233  188656 cri.go:89] found id: ""
	I0731 21:03:13.971265  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.971274  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:13.971280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:13.971331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:14.009757  188656 cri.go:89] found id: ""
	I0731 21:03:14.009787  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.009796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:14.009805  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:14.009870  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:14.047946  188656 cri.go:89] found id: ""
	I0731 21:03:14.047979  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.047990  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:14.047998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:14.048056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:14.084687  188656 cri.go:89] found id: ""
	I0731 21:03:14.084720  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.084731  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:14.084739  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:14.084805  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:14.124831  188656 cri.go:89] found id: ""
	I0731 21:03:14.124861  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.124870  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:14.124876  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:14.124929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:14.161242  188656 cri.go:89] found id: ""
	I0731 21:03:14.161275  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.161286  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:14.161295  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:14.161308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:14.241060  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:14.241115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:14.282382  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:14.282414  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:14.335201  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:14.335249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:14.351345  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:14.351379  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:14.436524  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
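Every `kubectl describe nodes` attempt in these cycles fails with a refused connection to localhost:8443 because no kube-apiserver container is running. A quick reachability probe for that port (a hypothetical check, not something the test harness runs) would report the same condition:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The describe-nodes failures all reduce to this: nothing is listening
	// on the apiserver port, so the TCP connection is refused.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```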
	I0731 21:03:16.937313  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:16.951403  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:16.951490  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:16.991735  188656 cri.go:89] found id: ""
	I0731 21:03:16.991766  188656 logs.go:276] 0 containers: []
	W0731 21:03:16.991777  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:16.991785  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:16.991852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:17.030327  188656 cri.go:89] found id: ""
	I0731 21:03:17.030353  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.030360  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:17.030366  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:17.030419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:17.068161  188656 cri.go:89] found id: ""
	I0731 21:03:17.068195  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.068206  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:17.068214  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:17.068286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:17.105561  188656 cri.go:89] found id: ""
	I0731 21:03:17.105590  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.105601  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:17.105609  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:17.105684  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:17.144503  188656 cri.go:89] found id: ""
	I0731 21:03:17.144529  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.144540  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:17.144547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:17.144610  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:17.183709  188656 cri.go:89] found id: ""
	I0731 21:03:17.183738  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.183747  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:17.183753  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:17.183815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:17.222083  188656 cri.go:89] found id: ""
	I0731 21:03:17.222109  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.222117  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:17.222124  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:17.222178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:17.259503  188656 cri.go:89] found id: ""
	I0731 21:03:17.259534  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.259547  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:17.259561  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:17.259578  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:17.300603  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:17.300642  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:17.352194  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:17.352235  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:17.367179  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:17.367209  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:17.440051  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:17.440074  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:17.440088  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:16.589914  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.082985  188133 pod_ready.go:81] duration metric: took 4m0.000734125s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:18.083015  188133 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:03:18.083039  188133 pod_ready.go:38] duration metric: took 4m12.543404692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:18.083069  188133 kubeadm.go:597] duration metric: took 4m20.473129745s to restartPrimaryControlPlane
	W0731 21:03:18.083176  188133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:18.083210  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
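Here the 188133 run gives up: its metrics-server pod never reported Ready within the 4m limit, so minikube abandons restarting the existing control plane and falls back to `kubeadm reset` before re-initialising. The condition being polled by the pod_ready.go lines is the standard PodReady condition; a minimal sketch of that check (not the pod_ready.go implementation; assumes the k8s.io/api module is available):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's PodReady condition is True, which is
// the state the pod_ready.go lines above keep polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A stand-in pod object; in the log this state comes from the apiserver.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("Ready:", isPodReady(pod)) // prints: Ready: false
}
```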
	I0731 21:03:17.610274  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:19.611592  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.514266  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.514967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.027644  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:20.041735  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:20.041826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:20.077436  188656 cri.go:89] found id: ""
	I0731 21:03:20.077470  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.077483  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:20.077491  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:20.077558  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:20.117420  188656 cri.go:89] found id: ""
	I0731 21:03:20.117449  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.117459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:20.117466  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:20.117533  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:20.157794  188656 cri.go:89] found id: ""
	I0731 21:03:20.157827  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.157838  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:20.157847  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:20.157914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:20.193760  188656 cri.go:89] found id: ""
	I0731 21:03:20.193788  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.193796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:20.193803  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:20.193856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:20.231731  188656 cri.go:89] found id: ""
	I0731 21:03:20.231764  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.231777  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:20.231785  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:20.231856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:20.268666  188656 cri.go:89] found id: ""
	I0731 21:03:20.268697  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.268709  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:20.268717  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:20.268786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:20.304355  188656 cri.go:89] found id: ""
	I0731 21:03:20.304392  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.304406  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:20.304414  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:20.304478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:20.343886  188656 cri.go:89] found id: ""
	I0731 21:03:20.343915  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.343927  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:20.343940  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:20.343957  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:20.358460  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:20.358494  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:20.435473  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:20.435499  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:20.435522  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:20.517961  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:20.518002  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:20.561528  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:20.561567  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.119570  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:23.134276  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:23.134366  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:23.172808  188656 cri.go:89] found id: ""
	I0731 21:03:23.172837  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.172846  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:23.172852  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:23.172914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:23.208038  188656 cri.go:89] found id: ""
	I0731 21:03:23.208067  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.208080  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:23.208086  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:23.208140  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:23.244493  188656 cri.go:89] found id: ""
	I0731 21:03:23.244523  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.244533  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:23.244539  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:23.244605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:23.280474  188656 cri.go:89] found id: ""
	I0731 21:03:23.280503  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.280510  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:23.280517  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:23.280581  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:23.317381  188656 cri.go:89] found id: ""
	I0731 21:03:23.317415  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.317428  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:23.317441  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:23.317511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:23.357023  188656 cri.go:89] found id: ""
	I0731 21:03:23.357051  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.357062  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:23.357071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:23.357134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:23.400176  188656 cri.go:89] found id: ""
	I0731 21:03:23.400211  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.400223  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:23.400230  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:23.400298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:23.440157  188656 cri.go:89] found id: ""
	I0731 21:03:23.440190  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.440201  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:23.440213  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:23.440234  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.494762  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:23.494802  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:23.511463  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:23.511510  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:23.600359  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:23.600383  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:23.600403  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:23.682683  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:23.682723  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:22.111495  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:24.112248  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:23.013460  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:25.014605  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:27.014900  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:26.225923  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:26.245708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.245791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.282882  188656 cri.go:89] found id: ""
	I0731 21:03:26.282910  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.282920  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:26.282928  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.282987  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.324227  188656 cri.go:89] found id: ""
	I0731 21:03:26.324268  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.324279  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:26.324287  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.324349  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.365996  188656 cri.go:89] found id: ""
	I0731 21:03:26.366027  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.366038  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:26.366047  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.366119  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.403790  188656 cri.go:89] found id: ""
	I0731 21:03:26.403823  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.403835  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:26.403844  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.403915  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.442924  188656 cri.go:89] found id: ""
	I0731 21:03:26.442947  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.442957  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:26.442964  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.443026  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.482260  188656 cri.go:89] found id: ""
	I0731 21:03:26.482286  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.482294  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:26.482300  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.482364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.526385  188656 cri.go:89] found id: ""
	I0731 21:03:26.526420  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.526432  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.526442  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:26.526511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:26.565217  188656 cri.go:89] found id: ""
	I0731 21:03:26.565250  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.565262  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:26.565275  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:26.565294  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:26.623437  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:26.623478  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:26.639642  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:26.639683  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:26.720274  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:26.720309  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.720325  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:26.799689  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:26.799728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:26.111147  188266 pod_ready.go:81] duration metric: took 4m0.007359775s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:26.111173  188266 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:03:26.111180  188266 pod_ready.go:38] duration metric: took 4m2.82978193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:26.111195  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:03:26.111220  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.111267  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.179210  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:26.179240  188266 cri.go:89] found id: ""
	I0731 21:03:26.179251  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:26.179315  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.184349  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.184430  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.221238  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:26.221267  188266 cri.go:89] found id: ""
	I0731 21:03:26.221277  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:26.221349  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.225908  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.225985  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.276864  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:26.276895  188266 cri.go:89] found id: ""
	I0731 21:03:26.276907  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:26.276974  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.281921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.282003  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.320868  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:26.320903  188266 cri.go:89] found id: ""
	I0731 21:03:26.320914  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:26.320984  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.326203  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.326272  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.378409  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:26.378433  188266 cri.go:89] found id: ""
	I0731 21:03:26.378442  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:26.378504  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.384006  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.384111  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.431113  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:26.431147  188266 cri.go:89] found id: ""
	I0731 21:03:26.431158  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:26.431226  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.437136  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.437213  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.484223  188266 cri.go:89] found id: ""
	I0731 21:03:26.484247  188266 logs.go:276] 0 containers: []
	W0731 21:03:26.484257  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.484263  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:26.484319  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:26.530433  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:26.530470  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.530476  188266 cri.go:89] found id: ""
	I0731 21:03:26.530486  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:26.530551  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.535747  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.541379  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:26.541406  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.586730  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.586769  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:27.133617  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:27.133672  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:27.183805  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:27.183846  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:27.226579  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:27.226620  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:27.290635  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:27.290671  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:27.330700  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:27.330732  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:27.370882  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:27.370918  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:27.426426  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:27.426471  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:27.466359  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:27.466396  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:27.515202  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:27.515235  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:27.569081  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:27.569122  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:27.586776  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:27.586809  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:30.223314  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:30.241046  188266 api_server.go:72] duration metric: took 4m14.179869513s to wait for apiserver process to appear ...
	I0731 21:03:30.241073  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:03:30.241118  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:30.241188  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:30.281267  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:30.281303  188266 cri.go:89] found id: ""
	I0731 21:03:30.281314  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:30.281397  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.285857  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:30.285927  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:30.321742  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:30.321770  188266 cri.go:89] found id: ""
	I0731 21:03:30.321779  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:30.321841  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.326210  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:30.326284  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:30.367998  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:30.368025  188266 cri.go:89] found id: ""
	I0731 21:03:30.368036  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:30.368101  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.372340  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:30.372412  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:30.413689  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:30.413714  188266 cri.go:89] found id: ""
	I0731 21:03:30.413725  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:30.413789  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.418525  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:30.418604  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:30.458505  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.458530  188266 cri.go:89] found id: ""
	I0731 21:03:30.458539  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:30.458587  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.462993  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:30.463058  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:30.500683  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.500711  188266 cri.go:89] found id: ""
	I0731 21:03:30.500722  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:30.500785  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.506197  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:30.506277  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:30.545243  188266 cri.go:89] found id: ""
	I0731 21:03:30.545273  188266 logs.go:276] 0 containers: []
	W0731 21:03:30.545284  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:30.545290  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:30.545371  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:30.588405  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:30.588459  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.588465  188266 cri.go:89] found id: ""
	I0731 21:03:30.588474  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:30.588539  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.593611  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.599345  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:30.599386  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.641530  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:30.641564  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.703655  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:30.703692  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.744119  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:30.744147  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.515238  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:32.014503  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:29.351214  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:29.365487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:29.365561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:29.402989  188656 cri.go:89] found id: ""
	I0731 21:03:29.403015  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.403022  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:29.403028  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:29.403079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:29.443276  188656 cri.go:89] found id: ""
	I0731 21:03:29.443310  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.443321  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:29.443329  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:29.443397  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:29.483285  188656 cri.go:89] found id: ""
	I0731 21:03:29.483311  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.483319  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:29.483326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:29.483384  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:29.522285  188656 cri.go:89] found id: ""
	I0731 21:03:29.522317  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.522329  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:29.522337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:29.522406  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:29.565115  188656 cri.go:89] found id: ""
	I0731 21:03:29.565145  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.565155  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:29.565163  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:29.565233  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:29.603768  188656 cri.go:89] found id: ""
	I0731 21:03:29.603805  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.603816  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:29.603822  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:29.603875  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:29.640380  188656 cri.go:89] found id: ""
	I0731 21:03:29.640406  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.640416  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:29.640424  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:29.640493  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:29.679699  188656 cri.go:89] found id: ""
	I0731 21:03:29.679727  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.679736  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:29.679749  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:29.679764  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:29.735555  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:29.735603  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:29.749670  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:29.749708  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:29.825950  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:29.825973  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:29.825989  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.915420  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:29.915463  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:32.462996  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:32.478659  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:32.478739  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:32.528625  188656 cri.go:89] found id: ""
	I0731 21:03:32.528651  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.528659  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:32.528665  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:32.528724  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:32.574371  188656 cri.go:89] found id: ""
	I0731 21:03:32.574399  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.574408  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:32.574414  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:32.574474  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:32.616916  188656 cri.go:89] found id: ""
	I0731 21:03:32.616960  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.616970  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:32.616975  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:32.617040  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:32.657725  188656 cri.go:89] found id: ""
	I0731 21:03:32.657758  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.657769  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:32.657777  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:32.657842  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:32.693197  188656 cri.go:89] found id: ""
	I0731 21:03:32.693226  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.693237  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:32.693245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:32.693316  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:32.733567  188656 cri.go:89] found id: ""
	I0731 21:03:32.733594  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.733602  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:32.733608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:32.733670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:32.774624  188656 cri.go:89] found id: ""
	I0731 21:03:32.774659  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.774671  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:32.774679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:32.774747  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:32.811755  188656 cri.go:89] found id: ""
	I0731 21:03:32.811790  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.811809  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:32.811822  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:32.811835  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:32.825512  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:32.825544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:32.902310  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:32.902339  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:32.902366  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:32.983347  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:32.983391  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:33.028037  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:33.028068  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:31.165988  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:31.166042  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:31.209564  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:31.209605  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:31.254061  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:31.254105  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:31.269227  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:31.269266  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:31.394442  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:31.394477  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:31.439011  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:31.439047  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:31.476798  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:31.476825  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:31.524460  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:31.524491  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:31.564254  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:31.564288  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:34.122836  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 21:03:34.128516  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 21:03:34.129484  188266 api_server.go:141] control plane version: v1.30.3
	I0731 21:03:34.129513  188266 api_server.go:131] duration metric: took 3.888432526s to wait for apiserver health ...
	I0731 21:03:34.129523  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:03:34.129554  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:34.129622  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:34.167751  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:34.167781  188266 cri.go:89] found id: ""
	I0731 21:03:34.167792  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:34.167860  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.172786  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:34.172858  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:34.212172  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.212204  188266 cri.go:89] found id: ""
	I0731 21:03:34.212215  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:34.212289  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.216651  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:34.216736  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:34.263492  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:34.263515  188266 cri.go:89] found id: ""
	I0731 21:03:34.263528  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:34.263592  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.268548  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:34.268630  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:34.309420  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:34.309453  188266 cri.go:89] found id: ""
	I0731 21:03:34.309463  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:34.309529  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.313921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:34.313993  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:34.354712  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.354740  188266 cri.go:89] found id: ""
	I0731 21:03:34.354754  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:34.354818  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.359363  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:34.359446  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:34.397596  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.397622  188266 cri.go:89] found id: ""
	I0731 21:03:34.397634  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:34.397710  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.402126  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:34.402207  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:34.447198  188266 cri.go:89] found id: ""
	I0731 21:03:34.447234  188266 logs.go:276] 0 containers: []
	W0731 21:03:34.447242  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:34.447248  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:34.447304  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:34.487429  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:34.487452  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.487457  188266 cri.go:89] found id: ""
	I0731 21:03:34.487464  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:34.487519  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.494362  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.499409  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:34.499438  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.549761  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:34.549802  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.588571  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:34.588603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.646590  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:34.646635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.691320  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:34.691353  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:35.098975  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:35.099018  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:35.153924  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:35.153964  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:35.168091  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:35.168121  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:35.214469  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:35.214511  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:35.260694  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:35.260724  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:35.299230  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:35.299261  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:35.413598  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:35.413635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:35.451331  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:35.451359  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:35.582896  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:35.597483  188656 kubeadm.go:597] duration metric: took 4m3.860422558s to restartPrimaryControlPlane
	W0731 21:03:35.597559  188656 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:35.597598  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:36.054326  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:36.070199  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:36.081882  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:36.093300  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:36.093322  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:36.093396  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:36.103781  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:36.103843  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:36.114702  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:36.125213  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:36.125299  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:36.136299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.146441  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:36.146520  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.157524  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:36.168247  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:36.168327  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:03:36.178875  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:36.253662  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:03:36.253804  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:36.401385  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:36.401550  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:36.401686  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:03:36.591601  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:34.513632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.515043  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.593492  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:36.593604  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:36.593690  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:36.593817  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:36.593907  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:36.594011  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:36.594090  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:36.594215  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:36.594602  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:36.595122  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:36.595323  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:36.595414  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:36.595548  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:37.052958  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:37.178980  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:37.375085  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:37.550735  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:37.571991  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:37.575050  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:37.575227  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:37.707194  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:37.997696  188266 system_pods.go:59] 8 kube-system pods found
	I0731 21:03:37.997725  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:37.997730  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:37.997734  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:37.997738  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:37.997741  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:37.997744  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:37.997750  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:37.997754  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:37.997762  188266 system_pods.go:74] duration metric: took 3.868231958s to wait for pod list to return data ...
	I0731 21:03:37.997773  188266 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:03:38.000640  188266 default_sa.go:45] found service account: "default"
	I0731 21:03:38.000665  188266 default_sa.go:55] duration metric: took 2.88647ms for default service account to be created ...
	I0731 21:03:38.000672  188266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:03:38.007107  188266 system_pods.go:86] 8 kube-system pods found
	I0731 21:03:38.007132  188266 system_pods.go:89] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:38.007137  188266 system_pods.go:89] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:38.007142  188266 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:38.007146  188266 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:38.007152  188266 system_pods.go:89] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:38.007158  188266 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:38.007164  188266 system_pods.go:89] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:38.007168  188266 system_pods.go:89] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:38.007175  188266 system_pods.go:126] duration metric: took 6.498733ms to wait for k8s-apps to be running ...
	I0731 21:03:38.007183  188266 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:03:38.007240  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:38.026906  188266 system_svc.go:56] duration metric: took 19.708653ms WaitForService to wait for kubelet
	I0731 21:03:38.026938  188266 kubeadm.go:582] duration metric: took 4m21.965767608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:03:38.026969  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:03:38.030479  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:03:38.030554  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 21:03:38.030577  188266 node_conditions.go:105] duration metric: took 3.601933ms to run NodePressure ...
	I0731 21:03:38.030600  188266 start.go:241] waiting for startup goroutines ...
	I0731 21:03:38.030611  188266 start.go:246] waiting for cluster config update ...
	I0731 21:03:38.030626  188266 start.go:255] writing updated cluster config ...
	I0731 21:03:38.031028  188266 ssh_runner.go:195] Run: rm -f paused
	I0731 21:03:38.082629  188266 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:03:38.084590  188266 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-125614" cluster and "default" namespace by default
	I0731 21:03:37.709295  188656 out.go:204]   - Booting up control plane ...
	I0731 21:03:37.709427  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:37.722549  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:37.723455  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:37.724194  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:37.726323  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:03:39.013773  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:41.016158  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:44.360883  188133 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.27764632s)
	I0731 21:03:44.360955  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:44.379069  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:44.389518  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:44.400223  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:44.400250  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:44.400302  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:44.410644  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:44.410718  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:44.421136  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:44.431161  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:44.431231  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:44.441936  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.451761  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:44.451820  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.462692  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:44.472982  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:44.473050  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:03:44.482980  188133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:44.532539  188133 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0731 21:03:44.532637  188133 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:44.651505  188133 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:44.651654  188133 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:44.651772  188133 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:44.660564  188133 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:44.662559  188133 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:44.662676  188133 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:44.662765  188133 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:44.662878  188133 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:44.662971  188133 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:44.663073  188133 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:44.663142  188133 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:44.663218  188133 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:44.663293  188133 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:44.663389  188133 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:44.663527  188133 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:44.663587  188133 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:44.663679  188133 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:44.813556  188133 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:44.908380  188133 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:03:45.005215  188133 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:45.138446  188133 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:45.222892  188133 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:45.223622  188133 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:45.226748  188133 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:43.513039  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.513901  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.228799  188133 out.go:204]   - Booting up control plane ...
	I0731 21:03:45.228934  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:45.229087  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:45.230021  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:45.249145  188133 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:45.258184  188133 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:45.258267  188133 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:45.392726  188133 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:03:45.392852  188133 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:03:45.899754  188133 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.694095ms
	I0731 21:03:45.899857  188133 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:03:51.901713  188133 kubeadm.go:310] [api-check] The API server is healthy after 6.00194457s
	I0731 21:03:51.914947  188133 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:03:51.932510  188133 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:03:51.971055  188133 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:03:51.971273  188133 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-916885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:03:51.985104  188133 kubeadm.go:310] [bootstrap-token] Using token: q86dx8.9ipyjyidvcwogxce
	I0731 21:03:47.515248  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:50.016206  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:51.986447  188133 out.go:204]   - Configuring RBAC rules ...
	I0731 21:03:51.986576  188133 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:03:51.993910  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:03:52.002474  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:03:52.007035  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:03:52.011708  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:03:52.020500  188133 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:03:52.310057  188133 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:03:52.778266  188133 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:03:53.308425  188133 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:03:53.309509  188133 kubeadm.go:310] 
	I0731 21:03:53.309585  188133 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:03:53.309597  188133 kubeadm.go:310] 
	I0731 21:03:53.309686  188133 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:03:53.309694  188133 kubeadm.go:310] 
	I0731 21:03:53.309715  188133 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:03:53.309771  188133 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:03:53.309875  188133 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:03:53.309894  188133 kubeadm.go:310] 
	I0731 21:03:53.310007  188133 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:03:53.310027  188133 kubeadm.go:310] 
	I0731 21:03:53.310088  188133 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:03:53.310099  188133 kubeadm.go:310] 
	I0731 21:03:53.310164  188133 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:03:53.310275  188133 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:03:53.310371  188133 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:03:53.310396  188133 kubeadm.go:310] 
	I0731 21:03:53.310499  188133 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:03:53.310601  188133 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:03:53.310611  188133 kubeadm.go:310] 
	I0731 21:03:53.310735  188133 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.310910  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 21:03:53.310961  188133 kubeadm.go:310] 	--control-plane 
	I0731 21:03:53.310970  188133 kubeadm.go:310] 
	I0731 21:03:53.311078  188133 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:03:53.311092  188133 kubeadm.go:310] 
	I0731 21:03:53.311222  188133 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.311402  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 21:03:53.312409  188133 kubeadm.go:310] W0731 21:03:44.497219    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312703  188133 kubeadm.go:310] W0731 21:03:44.498106    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312811  188133 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:03:53.312857  188133 cni.go:84] Creating CNI manager for ""
	I0731 21:03:53.312870  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:03:53.315035  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:03:53.316406  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:03:53.327870  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
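	(The two log lines above show minikube staging a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The actual file contents are not printed in this log; the Go sketch below writes an illustrative bridge conflist of the same general shape — the plugin fields and pod CIDR are assumptions for illustration, not the file minikube ships.)

	package main

	import (
		"log"
		"os"
	)

	// Illustrative only: the real 1-k8s.conflist written by minikube is not shown
	// in the log above; this is a typical bridge CNI config of the same shape.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
		log.Println("wrote bridge CNI config")
	}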
	I0731 21:03:53.352757  188133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:03:53.352902  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:53.352919  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-916885 minikube.k8s.io/updated_at=2024_07_31T21_03_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=no-preload-916885 minikube.k8s.io/primary=true
	I0731 21:03:53.403275  188133 ops.go:34] apiserver oom_adj: -16
	I0731 21:03:53.579520  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.080457  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.579898  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.080464  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.580211  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.080518  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.579806  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.080302  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.181987  188133 kubeadm.go:1113] duration metric: took 3.829153755s to wait for elevateKubeSystemPrivileges
	I0731 21:03:57.182024  188133 kubeadm.go:394] duration metric: took 4m59.623631766s to StartCluster
	I0731 21:03:57.182051  188133 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.182160  188133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:03:57.185297  188133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.185586  188133 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:03:57.185672  188133 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:03:57.185753  188133 addons.go:69] Setting storage-provisioner=true in profile "no-preload-916885"
	I0731 21:03:57.185776  188133 addons.go:69] Setting default-storageclass=true in profile "no-preload-916885"
	I0731 21:03:57.185797  188133 addons.go:69] Setting metrics-server=true in profile "no-preload-916885"
	I0731 21:03:57.185825  188133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-916885"
	I0731 21:03:57.185844  188133 addons.go:234] Setting addon metrics-server=true in "no-preload-916885"
	W0731 21:03:57.185856  188133 addons.go:243] addon metrics-server should already be in state true
	I0731 21:03:57.185864  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:03:57.185889  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.185785  188133 addons.go:234] Setting addon storage-provisioner=true in "no-preload-916885"
	W0731 21:03:57.185926  188133 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:03:57.185956  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.186201  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186226  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186247  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186279  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186301  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186345  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.187280  188133 out.go:177] * Verifying Kubernetes components...
	I0731 21:03:57.188864  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:03:57.202393  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0731 21:03:57.202431  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0731 21:03:57.202856  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.202946  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.203416  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203434  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203688  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203707  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203829  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204081  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204270  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.204428  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.204462  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.204960  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0731 21:03:57.205722  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.206275  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.206291  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.208245  188133 addons.go:234] Setting addon default-storageclass=true in "no-preload-916885"
	W0731 21:03:57.208264  188133 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:03:57.208296  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.208640  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.208663  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.208866  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.209432  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.209458  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.222235  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0731 21:03:57.222835  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.223408  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.223429  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.224137  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.224366  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.226564  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.227398  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0731 21:03:57.227842  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.228377  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.228399  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.228427  188133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:03:57.228836  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.229521  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.229573  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.230036  188133 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.230056  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:03:57.230075  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.230207  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0731 21:03:57.230601  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.230993  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.231008  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.231323  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.231519  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.233542  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.235239  188133 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:03:52.514632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:55.014017  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:57.235631  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236081  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.236105  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.236478  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:03:57.236493  188133 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:03:57.236510  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.236545  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.236711  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.236824  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.238988  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239335  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.239361  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.239645  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.239775  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.239902  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.252386  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0731 21:03:57.252846  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.253454  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.253474  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.253837  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.254048  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.255784  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.256020  188133 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.256037  188133 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:03:57.256057  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.258870  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259220  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.259254  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259446  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.259612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.259783  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.259940  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.405243  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:03:57.426852  188133 node_ready.go:35] waiting up to 6m0s for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494325  188133 node_ready.go:49] node "no-preload-916885" has status "Ready":"True"
	I0731 21:03:57.494352  188133 node_ready.go:38] duration metric: took 67.471516ms for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494365  188133 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:57.497819  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:03:57.497849  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:03:57.528118  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:03:57.528148  188133 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:03:57.557889  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.568872  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.583099  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:03:57.587315  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:57.587342  188133 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:03:57.645504  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:58.515635  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515650  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515667  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.515675  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516054  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516100  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516117  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516161  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516187  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516141  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516213  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516097  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516431  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516444  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.517889  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.517914  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.517930  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.569097  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.569120  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.569463  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.569511  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.569520  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726076  188133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.080526254s)
	I0731 21:03:58.726140  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726153  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.726469  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.726490  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726501  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726514  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.728603  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.728666  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.728688  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.728715  188133 addons.go:475] Verifying addon metrics-server=true in "no-preload-916885"
	I0731 21:03:58.730520  188133 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:03:58.731823  188133 addons.go:510] duration metric: took 1.546157188s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
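	(The addon phase above copies each manifest into /etc/kubernetes/addons/ and applies them with the bundled kubectl against the in-VM kubeconfig. A minimal sketch of that apply step is below; the binary path, kubeconfig path, and manifest names are taken from the log, and running this outside the minikube VM is only illustrative.)

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Manifests staged under /etc/kubernetes/addons/ in the log above.
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}

		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}

		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", args...)
		// Point kubectl at the same kubeconfig the logged command uses.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl apply failed: %v\n%s", err, out)
		}
		log.Printf("applied metrics-server manifests:\n%s", out)
	}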
	I0731 21:03:57.515366  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.515730  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:02.013803  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.593082  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:00.589165  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:00.589192  188133 pod_ready.go:81] duration metric: took 3.00606369s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:00.589204  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:02.597316  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.096168  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.597832  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.597857  188133 pod_ready.go:81] duration metric: took 5.008646335s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.597866  188133 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603105  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.603128  188133 pod_ready.go:81] duration metric: took 5.254251ms for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603140  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610748  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.610771  188133 pod_ready.go:81] duration metric: took 7.623438ms for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610782  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615949  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.615966  188133 pod_ready.go:81] duration metric: took 5.176213ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615975  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620431  188133 pod_ready.go:92] pod "kube-proxy-b4h2z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.620450  188133 pod_ready.go:81] duration metric: took 4.469258ms for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620458  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993080  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.993104  188133 pod_ready.go:81] duration metric: took 372.640001ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993112  188133 pod_ready.go:38] duration metric: took 8.498733061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
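	(The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True or the 6m0s budget expires. The sketch below expresses a rough equivalent of that check with client-go; it is not minikube's actual pod_ready implementation, and the pod name and kubeconfig path are simply reused from the log.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-916885", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}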
	I0731 21:04:05.993125  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:05.993186  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:06.009952  188133 api_server.go:72] duration metric: took 8.824325154s to wait for apiserver process to appear ...
	I0731 21:04:06.009981  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:06.010001  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 21:04:06.014715  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 21:04:06.015917  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:04:06.015944  188133 api_server.go:131] duration metric: took 5.952931ms to wait for apiserver health ...
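	(The healthz wait above amounts to an HTTPS GET against https://192.168.72.239:8443/healthz until it returns 200 with body "ok". Below is a minimal self-contained probe of that endpoint; it skips certificate verification purely to stay self-contained, whereas the real check authenticates with the cluster credentials from the kubeconfig.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustrative shortcut: skip TLS verification instead of loading the
		// cluster CA from the kubeconfig.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		resp, err := client.Get("https://192.168.72.239:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}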
	I0731 21:04:06.015954  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:06.196874  188133 system_pods.go:59] 9 kube-system pods found
	I0731 21:04:06.196907  188133 system_pods.go:61] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.196914  188133 system_pods.go:61] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.196918  188133 system_pods.go:61] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.196923  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.196929  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.196933  188133 system_pods.go:61] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.196938  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.196945  188133 system_pods.go:61] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.196950  188133 system_pods.go:61] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.196960  188133 system_pods.go:74] duration metric: took 180.999269ms to wait for pod list to return data ...
	I0731 21:04:06.196970  188133 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:06.394499  188133 default_sa.go:45] found service account: "default"
	I0731 21:04:06.394530  188133 default_sa.go:55] duration metric: took 197.552628ms for default service account to be created ...
	I0731 21:04:06.394539  188133 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:06.598314  188133 system_pods.go:86] 9 kube-system pods found
	I0731 21:04:06.598345  188133 system_pods.go:89] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.598354  188133 system_pods.go:89] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.598361  188133 system_pods.go:89] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.598370  188133 system_pods.go:89] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.598376  188133 system_pods.go:89] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.598389  188133 system_pods.go:89] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.598397  188133 system_pods.go:89] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.598408  188133 system_pods.go:89] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.598419  188133 system_pods.go:89] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.598430  188133 system_pods.go:126] duration metric: took 203.884264ms to wait for k8s-apps to be running ...
	I0731 21:04:06.598442  188133 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:06.598498  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:06.613642  188133 system_svc.go:56] duration metric: took 15.190132ms WaitForService to wait for kubelet
	I0731 21:04:06.613675  188133 kubeadm.go:582] duration metric: took 9.4280531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:06.613705  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:06.794163  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:06.794191  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:06.794204  188133 node_conditions.go:105] duration metric: took 180.492992ms to run NodePressure ...
	I0731 21:04:06.794218  188133 start.go:241] waiting for startup goroutines ...
	I0731 21:04:06.794227  188133 start.go:246] waiting for cluster config update ...
	I0731 21:04:06.794239  188133 start.go:255] writing updated cluster config ...
	I0731 21:04:06.794547  188133 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:06.844118  188133 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:04:06.846234  188133 out.go:177] * Done! kubectl is now configured to use "no-preload-916885" cluster and "default" namespace by default
	I0731 21:04:04.015079  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:06.514907  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:08.514958  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:11.014341  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:13.514956  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:14.014985  187862 pod_ready.go:81] duration metric: took 4m0.007784922s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:04:14.015013  187862 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:04:14.015020  187862 pod_ready.go:38] duration metric: took 4m6.056814749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:14.015034  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:14.015079  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:14.015127  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:14.086254  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:14.086283  187862 cri.go:89] found id: ""
	I0731 21:04:14.086293  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:14.086368  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.091267  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:14.091334  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:14.138577  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.138613  187862 cri.go:89] found id: ""
	I0731 21:04:14.138624  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:14.138696  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.143245  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:14.143315  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:14.182295  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.182325  187862 cri.go:89] found id: ""
	I0731 21:04:14.182336  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:14.182400  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.186861  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:14.186936  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:14.230524  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:14.230547  187862 cri.go:89] found id: ""
	I0731 21:04:14.230555  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:14.230609  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.235285  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:14.235354  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:14.279188  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.279209  187862 cri.go:89] found id: ""
	I0731 21:04:14.279217  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:14.279268  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.284280  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:14.284362  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:14.333736  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:14.333764  187862 cri.go:89] found id: ""
	I0731 21:04:14.333774  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:14.333830  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.338652  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:14.338717  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:14.380632  187862 cri.go:89] found id: ""
	I0731 21:04:14.380663  187862 logs.go:276] 0 containers: []
	W0731 21:04:14.380672  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:14.380678  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:14.380747  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:14.424705  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.424727  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.424732  187862 cri.go:89] found id: ""
	I0731 21:04:14.424741  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:14.424801  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.429310  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.434243  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:14.434267  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:14.490743  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:14.490782  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.536575  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:14.536613  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.585952  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:14.585986  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.626198  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:14.626228  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:14.672674  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:14.672712  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.711759  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:14.711788  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.757020  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:14.757047  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:15.286344  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:15.286393  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:15.301933  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:15.301969  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:15.451532  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:15.451566  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:15.502398  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:15.502443  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:15.544678  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:15.544719  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
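	(Each "Gathering logs for ..." step above first resolves container IDs with `crictl ps -a --quiet --name=<component>` and then tails each container's log with `crictl logs --tail 400 <id>`. A condensed sketch of that loop, wrapping the same commands seen in the log with os/exec:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors `sudo crictl ps -a --quiet --name=<component>` and
	// returns the matching container IDs, one per line of output.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner"}
		for _, component := range components {
			ids, err := containerIDs(component)
			if err != nil {
				fmt.Println("listing", component, "failed:", err)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, as the log-gathering step above does.
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Println("logs for", id, "failed:", err)
					continue
				}
				fmt.Printf("=== %s (%s) ===\n%s\n", component, id, logs)
			}
		}
	}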
	I0731 21:04:17.729291  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:04:17.730290  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:17.730512  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:18.104050  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:18.121028  187862 api_server.go:72] duration metric: took 4m17.382743031s to wait for apiserver process to appear ...
	I0731 21:04:18.121061  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:18.121109  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:18.121179  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:18.165472  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.165498  187862 cri.go:89] found id: ""
	I0731 21:04:18.165507  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:18.165559  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.169592  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:18.169663  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:18.216918  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.216942  187862 cri.go:89] found id: ""
	I0731 21:04:18.216951  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:18.217015  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.221467  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:18.221546  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:18.267066  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.267089  187862 cri.go:89] found id: ""
	I0731 21:04:18.267098  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:18.267164  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.271583  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:18.271662  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:18.316381  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.316404  187862 cri.go:89] found id: ""
	I0731 21:04:18.316412  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:18.316472  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.320859  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:18.320932  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:18.365366  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:18.365396  187862 cri.go:89] found id: ""
	I0731 21:04:18.365410  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:18.365476  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.369933  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:18.370019  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:18.411121  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:18.411143  187862 cri.go:89] found id: ""
	I0731 21:04:18.411152  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:18.411203  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.415493  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:18.415561  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:18.453040  187862 cri.go:89] found id: ""
	I0731 21:04:18.453069  187862 logs.go:276] 0 containers: []
	W0731 21:04:18.453078  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:18.453085  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:18.453153  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:18.499335  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:18.499359  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.499363  187862 cri.go:89] found id: ""
	I0731 21:04:18.499371  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:18.499446  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.504353  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.508619  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:18.508640  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:18.562692  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:18.562732  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.623405  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:18.623446  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.679472  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:18.679510  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.728893  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:18.728933  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.770963  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:18.770994  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:18.819353  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:18.819385  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:18.835654  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:18.835684  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:18.947479  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:18.947516  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.995005  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:18.995043  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:19.033246  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:19.033274  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:19.092703  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:19.092740  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:19.129738  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:19.129769  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:22.058935  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 21:04:22.063496  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 21:04:22.064670  187862 api_server.go:141] control plane version: v1.30.3
	I0731 21:04:22.064690  187862 api_server.go:131] duration metric: took 3.943623055s to wait for apiserver health ...
	I0731 21:04:22.064699  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:22.064721  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:22.064771  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:22.103710  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.103733  187862 cri.go:89] found id: ""
	I0731 21:04:22.103741  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:22.103798  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.108133  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:22.108203  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:22.159120  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.159145  187862 cri.go:89] found id: ""
	I0731 21:04:22.159155  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:22.159213  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.165107  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:22.165169  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:22.202426  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.202454  187862 cri.go:89] found id: ""
	I0731 21:04:22.202464  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:22.202524  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.206785  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:22.206842  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:22.245008  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.245039  187862 cri.go:89] found id: ""
	I0731 21:04:22.245050  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:22.245111  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.249467  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:22.249548  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:22.731353  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:22.731627  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:22.298105  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.298135  187862 cri.go:89] found id: ""
	I0731 21:04:22.298145  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:22.298209  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.302845  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:22.302902  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:22.346868  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.346898  187862 cri.go:89] found id: ""
	I0731 21:04:22.346909  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:22.346978  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.351246  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:22.351313  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:22.389698  187862 cri.go:89] found id: ""
	I0731 21:04:22.389730  187862 logs.go:276] 0 containers: []
	W0731 21:04:22.389742  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:22.389751  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:22.389817  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:22.425212  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.425234  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.425238  187862 cri.go:89] found id: ""
	I0731 21:04:22.425245  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:22.425298  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.429584  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.433471  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:22.433496  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.490354  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:22.490390  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.530117  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:22.530146  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:22.545249  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:22.545281  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:22.658074  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:22.658115  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.711537  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:22.711573  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.758644  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:22.758685  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.796716  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:22.796751  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.843502  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:22.843538  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.881738  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:22.881765  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:22.936317  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:22.936360  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.977562  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:22.977592  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:23.354873  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:23.354921  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:25.917553  187862 system_pods.go:59] 8 kube-system pods found
	I0731 21:04:25.917588  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.917593  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.917597  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.917601  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.917604  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.917608  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.917614  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.917624  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.917635  187862 system_pods.go:74] duration metric: took 3.852929636s to wait for pod list to return data ...
	I0731 21:04:25.917649  187862 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:25.920234  187862 default_sa.go:45] found service account: "default"
	I0731 21:04:25.920256  187862 default_sa.go:55] duration metric: took 2.600194ms for default service account to be created ...
	I0731 21:04:25.920264  187862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:25.926296  187862 system_pods.go:86] 8 kube-system pods found
	I0731 21:04:25.926325  187862 system_pods.go:89] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.926330  187862 system_pods.go:89] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.926334  187862 system_pods.go:89] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.926338  187862 system_pods.go:89] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.926342  187862 system_pods.go:89] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.926346  187862 system_pods.go:89] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.926352  187862 system_pods.go:89] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.926356  187862 system_pods.go:89] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.926365  187862 system_pods.go:126] duration metric: took 6.094538ms to wait for k8s-apps to be running ...
	I0731 21:04:25.926373  187862 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:25.926433  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:25.945225  187862 system_svc.go:56] duration metric: took 18.837835ms WaitForService to wait for kubelet
	I0731 21:04:25.945264  187862 kubeadm.go:582] duration metric: took 4m25.206984451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:25.945294  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:25.948480  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:25.948506  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:25.948520  187862 node_conditions.go:105] duration metric: took 3.219175ms to run NodePressure ...
	I0731 21:04:25.948535  187862 start.go:241] waiting for startup goroutines ...
	I0731 21:04:25.948543  187862 start.go:246] waiting for cluster config update ...
	I0731 21:04:25.948556  187862 start.go:255] writing updated cluster config ...
	I0731 21:04:25.949317  187862 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:26.000525  187862 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:04:26.002719  187862 out.go:177] * Done! kubectl is now configured to use "embed-certs-831240" cluster and "default" namespace by default
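	(For reference, a minimal sketch of how the cluster configured above could be checked by hand, assuming the "embed-certs-831240" context and the metrics-server pod name shown in the pod list; these commands are not part of the captured run:
		# list kube-system pods for the embed-certs profile
		kubectl --context embed-certs-831240 -n kube-system get pods
		# inspect the pod that was still Pending in the log above
		kubectl --context embed-certs-831240 -n kube-system describe pod metrics-server-569cc877fc-slbkm
	)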
	I0731 21:04:32.732572  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:32.732835  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:52.734257  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:52.734530  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739465  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:05:32.739778  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739796  188656 kubeadm.go:310] 
	I0731 21:05:32.739854  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:05:32.739962  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:05:32.739988  188656 kubeadm.go:310] 
	I0731 21:05:32.740034  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:05:32.740083  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:05:32.740230  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:05:32.740245  188656 kubeadm.go:310] 
	I0731 21:05:32.740393  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:05:32.740441  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:05:32.740485  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:05:32.740494  188656 kubeadm.go:310] 
	I0731 21:05:32.740624  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:05:32.740741  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:05:32.740752  188656 kubeadm.go:310] 
	I0731 21:05:32.740888  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:05:32.741008  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:05:32.741084  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:05:32.741145  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:05:32.741152  188656 kubeadm.go:310] 
	I0731 21:05:32.741834  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:05:32.741967  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:05:32.742066  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:05:32.742264  188656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:05:32.742340  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:05:33.227380  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:05:33.243864  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:05:33.254208  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:05:33.254234  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:05:33.254313  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:05:33.264766  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:05:33.264846  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:05:33.275517  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:05:33.286281  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:05:33.286358  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:05:33.297108  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.307555  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:05:33.307627  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.318193  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:05:33.328155  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:05:33.328220  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:05:33.338088  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:05:33.569897  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:07:29.725230  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:07:29.725381  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:07:29.726868  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:07:29.726959  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:07:29.727064  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:07:29.727204  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:07:29.727322  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:07:29.727389  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:07:29.729525  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:07:29.729659  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:07:29.729761  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:07:29.729918  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:07:29.730026  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:07:29.730126  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:07:29.730268  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:07:29.730369  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:07:29.730461  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:07:29.730555  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:07:29.730658  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:07:29.730713  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:07:29.730790  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:07:29.730856  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:07:29.730931  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:07:29.731014  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:07:29.731111  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:07:29.731248  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:07:29.731339  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:07:29.731395  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:07:29.731486  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:07:29.733052  188656 out.go:204]   - Booting up control plane ...
	I0731 21:07:29.733146  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:07:29.733226  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:07:29.733305  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:07:29.733454  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:07:29.733656  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:07:29.733735  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:07:29.733830  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734048  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734116  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734275  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734331  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734543  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734642  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734868  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734966  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.735234  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.735252  188656 kubeadm.go:310] 
	I0731 21:07:29.735313  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:07:29.735376  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:07:29.735385  188656 kubeadm.go:310] 
	I0731 21:07:29.735432  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:07:29.735480  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:07:29.735624  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:07:29.735634  188656 kubeadm.go:310] 
	I0731 21:07:29.735779  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:07:29.735830  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:07:29.735879  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:07:29.735889  188656 kubeadm.go:310] 
	I0731 21:07:29.736038  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:07:29.736129  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:07:29.736141  188656 kubeadm.go:310] 
	I0731 21:07:29.736241  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:07:29.736315  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:07:29.736400  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:07:29.736480  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:07:29.736537  188656 kubeadm.go:310] 
	I0731 21:07:29.736579  188656 kubeadm.go:394] duration metric: took 7m58.053099483s to StartCluster
	I0731 21:07:29.736660  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:07:29.736793  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:07:29.802897  188656 cri.go:89] found id: ""
	I0731 21:07:29.802932  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.802945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:07:29.802953  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:07:29.803021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:07:29.840059  188656 cri.go:89] found id: ""
	I0731 21:07:29.840088  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.840098  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:07:29.840106  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:07:29.840178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:07:29.881030  188656 cri.go:89] found id: ""
	I0731 21:07:29.881058  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.881066  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:07:29.881073  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:07:29.881150  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:07:29.923495  188656 cri.go:89] found id: ""
	I0731 21:07:29.923524  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.923532  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:07:29.923538  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:07:29.923604  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:07:29.966128  188656 cri.go:89] found id: ""
	I0731 21:07:29.966156  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.966164  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:07:29.966171  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:07:29.966236  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:07:30.007648  188656 cri.go:89] found id: ""
	I0731 21:07:30.007678  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.007687  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:07:30.007693  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:07:30.007748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:07:30.047857  188656 cri.go:89] found id: ""
	I0731 21:07:30.047887  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.047903  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:07:30.047909  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:07:30.047959  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:07:30.087245  188656 cri.go:89] found id: ""
	I0731 21:07:30.087275  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.087283  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:07:30.087294  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:07:30.087308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:07:30.168205  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:07:30.168235  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:07:30.168256  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:07:30.276908  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:07:30.276951  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:07:30.322993  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:07:30.323030  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:07:30.375237  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:07:30.375287  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 21:07:30.392523  188656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:07:30.392579  188656 out.go:239] * 
	W0731 21:07:30.392653  188656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.392683  188656 out.go:239] * 
	W0731 21:07:30.393845  188656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:07:30.397498  188656 out.go:177] 
	W0731 21:07:30.398890  188656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.398959  188656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:07:30.398995  188656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:07:30.401295  188656 out.go:177] 
	
	
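	(For reference, a minimal sketch of how the suggestion above could be acted on, assuming the old-k8s-version-239115 profile named in the CRI-O section below; a hedged example, not output from this run:
		# check kubelet state on the node, per the kubeadm hint
		minikube -p old-k8s-version-239115 ssh -- sudo systemctl status kubelet
		minikube -p old-k8s-version-239115 ssh -- sudo journalctl -xeu kubelet | tail -n 100
		# list any control-plane containers via the CRI-O socket mentioned in the error text
		minikube -p old-k8s-version-239115 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
		# retry with the cgroup-driver override suggested alongside issue 4172
		minikube start -p old-k8s-version-239115 --extra-config=kubelet.cgroup-driver=systemd
	)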
	==> CRI-O <==
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.367364237Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460052367341378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=228ccf60-e803-4121-956a-577072936902 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.367959797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9266c75b-4cee-41bc-9cd4-27101d0806d4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.368007597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9266c75b-4cee-41bc-9cd4-27101d0806d4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.368037770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9266c75b-4cee-41bc-9cd4-27101d0806d4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.401159106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63ef34bd-34fd-4350-9f27-98f196fc155b name=/runtime.v1.RuntimeService/Version
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.401294825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63ef34bd-34fd-4350-9f27-98f196fc155b name=/runtime.v1.RuntimeService/Version
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.402671378Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8de64d5c-4eda-4b09-a0be-e451f3a8aaf5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.403050300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460052403031104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8de64d5c-4eda-4b09-a0be-e451f3a8aaf5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.403587781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48a3b276-f5b4-42c7-b97d-87fd08393af4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.403663559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48a3b276-f5b4-42c7-b97d-87fd08393af4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.403695184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=48a3b276-f5b4-42c7-b97d-87fd08393af4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.439601057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=270844ca-52df-4b21-8f51-7c40a111075b name=/runtime.v1.RuntimeService/Version
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.439705801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=270844ca-52df-4b21-8f51-7c40a111075b name=/runtime.v1.RuntimeService/Version
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.440916900Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb0dbb39-9059-41b4-b89e-786a41bc00e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.441366198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460052441341575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb0dbb39-9059-41b4-b89e-786a41bc00e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.442167517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fbfc41e-70ff-40b4-9fc4-fbbb4457b848 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.442325269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fbfc41e-70ff-40b4-9fc4-fbbb4457b848 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.442365629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2fbfc41e-70ff-40b4-9fc4-fbbb4457b848 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.479768560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16d22d9a-66c8-4a1b-a9ee-6a18cb1d9f9d name=/runtime.v1.RuntimeService/Version
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.479846273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16d22d9a-66c8-4a1b-a9ee-6a18cb1d9f9d name=/runtime.v1.RuntimeService/Version
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.482101266Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0914f29-11cd-4732-8a8e-a4d6f2dd58a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.482681106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460052482645122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0914f29-11cd-4732-8a8e-a4d6f2dd58a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.483459406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a193ecc-fe70-4351-b062-f37b07974f8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.483516428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a193ecc-fe70-4351-b062-f37b07974f8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:07:32 old-k8s-version-239115 crio[646]: time="2024-07-31 21:07:32.483548958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1a193ecc-fe70-4351-b062-f37b07974f8f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 20:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062231] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050403] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.190389] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.608719] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.611027] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.653908] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.062587] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060554] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.234631] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.143128] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.268421] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.725014] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.065215] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.078703] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +10.116461] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 21:03] systemd-fstab-generator[5008]: Ignoring "noauto" option for root device
	[Jul31 21:05] systemd-fstab-generator[5292]: Ignoring "noauto" option for root device
	[  +0.069669] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:07:32 up 8 min,  0 users,  load average: 0.03, 0.15, 0.09
	Linux old-k8s-version-239115 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]: net.(*sysDialer).dialSerial(0xc000ccf800, 0x4f7fe40, 0xc000bf96e0, 0xc000ccb6f0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]:         /usr/local/go/src/net/dial.go:548 +0x152
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]: net.(*Dialer).DialContext(0xc000a9ae40, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ce6390, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000aa4dc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ce6390, 0x24, 0x60, 0x7fe6941b0b98, 0x118, ...)
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]: net/http.(*Transport).dial(0xc00002e140, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000ce6390, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]: net/http.(*Transport).dialConn(0xc00002e140, 0x4f7fe00, 0xc000120018, 0x0, 0xc000bb50e0, 0x5, 0xc000ce6390, 0x24, 0x0, 0xc000bf5c20, ...)
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]: net/http.(*Transport).dialConnFor(0xc00002e140, 0xc000b144d0)
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]: created by net/http.(*Transport).queueForDial
	Jul 31 21:07:29 old-k8s-version-239115 kubelet[5470]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 31 21:07:29 old-k8s-version-239115 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 21:07:29 old-k8s-version-239115 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 21:07:30 old-k8s-version-239115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 31 21:07:30 old-k8s-version-239115 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 21:07:30 old-k8s-version-239115 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 21:07:30 old-k8s-version-239115 kubelet[5536]: I0731 21:07:30.488895    5536 server.go:416] Version: v1.20.0
	Jul 31 21:07:30 old-k8s-version-239115 kubelet[5536]: I0731 21:07:30.489259    5536 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 21:07:30 old-k8s-version-239115 kubelet[5536]: I0731 21:07:30.491267    5536 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 21:07:30 old-k8s-version-239115 kubelet[5536]: I0731 21:07:30.492638    5536 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 31 21:07:30 old-k8s-version-239115 kubelet[5536]: W0731 21:07:30.492838    5536 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 2 (228.853831ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-239115" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (740.40s)
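The failure above ends with kubeadm timing out because the kubelet on the v1.20.0 node never reports healthy, and the captured output itself names the recovery path (systemctl/journalctl, crictl, and the cgroup-driver suggestion). A minimal manual triage sketch along those lines, assuming only the profile name, socket path, and flag already shown in the log and the standard minikube/crictl CLIs, would be:

	# inspect the kubelet unit and its recent logs inside the VM
	out/minikube-linux-amd64 -p old-k8s-version-239115 ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 50"
	# list any control-plane containers cri-o managed to start (the log above shows an empty container list)
	out/minikube-linux-amd64 -p old-k8s-version-239115 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver hint from the Suggestion line
	out/minikube-linux-amd64 start -p old-k8s-version-239115 --extra-config=kubelet.cgroup-driver=systemd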

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:12:38.64150923 +0000 UTC m=+6330.891867053
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
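The selector and namespace the harness polls are given in the lines above; a rough manual equivalent of that wait, assuming the kubeconfig context matches the profile name (as the other kubectl invocations in this report do), would be:

	kubectl --context default-k8s-diff-port-125614 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-125614 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s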
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-125614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-125614 logs -n 25: (2.169100385s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC |                     |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo find                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo crio                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-341849                                       | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-248084 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-248084                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:51 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831240            | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-916885             | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:55:13
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:55:13.835355  188656 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:55:13.835514  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835525  188656 out.go:304] Setting ErrFile to fd 2...
	I0731 20:55:13.835531  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835717  188656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:55:13.836233  188656 out.go:298] Setting JSON to false
	I0731 20:55:13.837146  188656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9450,"bootTime":1722449864,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:55:13.837206  188656 start.go:139] virtualization: kvm guest
	I0731 20:55:13.839094  188656 out.go:177] * [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:55:13.840630  188656 notify.go:220] Checking for updates...
	I0731 20:55:13.840638  188656 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:55:13.841884  188656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:55:13.843054  188656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:55:13.844295  188656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:55:13.845348  188656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:55:13.846480  188656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:55:13.847974  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:55:13.848349  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.848390  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.863017  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0731 20:55:13.863418  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.863927  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.863980  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.864357  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.864625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.866178  188656 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 20:55:13.867248  188656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:55:13.867523  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.867552  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.881922  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0731 20:55:13.882304  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.882707  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.882729  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.883037  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.883214  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.917067  188656 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:55:13.918247  188656 start.go:297] selected driver: kvm2
	I0731 20:55:13.918260  188656 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.918396  188656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:55:13.919323  188656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.919428  188656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:55:13.934150  188656 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:55:13.934506  188656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:55:13.934569  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:55:13.934583  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:55:13.934630  188656 start.go:340] cluster config:
	{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.934737  188656 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.936401  188656 out.go:177] * Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	I0731 20:55:13.769565  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:13.937700  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:55:13.937735  188656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:55:13.937743  188656 cache.go:56] Caching tarball of preloaded images
	I0731 20:55:13.937806  188656 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:55:13.937816  188656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:55:13.937907  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:55:13.938068  188656 start.go:360] acquireMachinesLock for old-k8s-version-239115: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:55:19.845616  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:22.917614  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:28.997601  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:32.069596  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:38.149607  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:41.221579  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:47.301587  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:50.373695  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:56.453611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:59.525649  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:05.605640  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:08.677654  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:14.757599  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:17.829627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:23.909581  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:26.981613  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:33.061611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:36.133597  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:42.213638  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:45.285703  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:51.365653  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:54.437615  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:00.517627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:03.589595  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:09.669666  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:12.741661  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:18.821643  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:21.893594  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:27.973636  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:31.045651  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:37.125619  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:40.197656  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:46.277679  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:49.349535  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:55.429634  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:58.501611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:04.581620  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:07.653642  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:13.733571  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:16.805674  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:19.809697  188133 start.go:364] duration metric: took 4m15.439364065s to acquireMachinesLock for "no-preload-916885"
	I0731 20:58:19.809748  188133 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:19.809756  188133 fix.go:54] fixHost starting: 
	I0731 20:58:19.810113  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:19.810149  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:19.825131  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0731 20:58:19.825615  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:19.826110  188133 main.go:141] libmachine: Using API Version  1
	I0731 20:58:19.826132  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:19.826439  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:19.826616  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:19.826840  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 20:58:19.828267  188133 fix.go:112] recreateIfNeeded on no-preload-916885: state=Stopped err=<nil>
	I0731 20:58:19.828294  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	W0731 20:58:19.828471  188133 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:19.829957  188133 out.go:177] * Restarting existing kvm2 VM for "no-preload-916885" ...
	I0731 20:58:19.807506  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:19.807579  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.807919  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:58:19.807946  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.808126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:58:19.809580  187862 machine.go:97] duration metric: took 4m37.431426503s to provisionDockerMachine
	I0731 20:58:19.809625  187862 fix.go:56] duration metric: took 4m37.4520345s for fixHost
	I0731 20:58:19.809631  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 4m37.452053341s
	W0731 20:58:19.809664  187862 start.go:714] error starting host: provision: host is not running
	W0731 20:58:19.809893  187862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 20:58:19.809916  187862 start.go:729] Will try again in 5 seconds ...
	I0731 20:58:19.831221  188133 main.go:141] libmachine: (no-preload-916885) Calling .Start
	I0731 20:58:19.831409  188133 main.go:141] libmachine: (no-preload-916885) Ensuring networks are active...
	I0731 20:58:19.832210  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network default is active
	I0731 20:58:19.832536  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network mk-no-preload-916885 is active
	I0731 20:58:19.832885  188133 main.go:141] libmachine: (no-preload-916885) Getting domain xml...
	I0731 20:58:19.833563  188133 main.go:141] libmachine: (no-preload-916885) Creating domain...
	I0731 20:58:21.031310  188133 main.go:141] libmachine: (no-preload-916885) Waiting to get IP...
	I0731 20:58:21.032067  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.032519  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.032626  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.032509  189287 retry.go:31] will retry after 207.547113ms: waiting for machine to come up
	I0731 20:58:21.242229  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.242716  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.242797  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.242683  189287 retry.go:31] will retry after 307.483232ms: waiting for machine to come up
	I0731 20:58:21.552437  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.552954  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.552977  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.552911  189287 retry.go:31] will retry after 441.063904ms: waiting for machine to come up
	I0731 20:58:21.995514  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.995860  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.995903  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.995813  189287 retry.go:31] will retry after 596.915537ms: waiting for machine to come up
	I0731 20:58:22.594563  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:22.595037  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:22.595079  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:22.594988  189287 retry.go:31] will retry after 471.207023ms: waiting for machine to come up
	I0731 20:58:23.067499  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.067926  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.067950  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.067899  189287 retry.go:31] will retry after 756.851428ms: waiting for machine to come up
	I0731 20:58:23.826869  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.827277  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.827305  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.827232  189287 retry.go:31] will retry after 981.303239ms: waiting for machine to come up
	I0731 20:58:24.810830  187862 start.go:360] acquireMachinesLock for embed-certs-831240: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:58:24.810239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:24.810615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:24.810651  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:24.810584  189287 retry.go:31] will retry after 1.18169902s: waiting for machine to come up
	I0731 20:58:25.994320  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:25.994700  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:25.994728  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:25.994635  189287 retry.go:31] will retry after 1.781207961s: waiting for machine to come up
	I0731 20:58:27.778381  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:27.778764  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:27.778805  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:27.778734  189287 retry.go:31] will retry after 1.885603462s: waiting for machine to come up
	I0731 20:58:29.665633  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:29.666049  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:29.666070  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:29.666026  189287 retry.go:31] will retry after 2.664379174s: waiting for machine to come up
	I0731 20:58:32.333226  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:32.333615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:32.333644  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:32.333594  189287 retry.go:31] will retry after 2.932420774s: waiting for machine to come up
	I0731 20:58:35.267165  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:35.267527  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:35.267558  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:35.267496  189287 retry.go:31] will retry after 4.378841892s: waiting for machine to come up
	I0731 20:58:41.010483  188266 start.go:364] duration metric: took 4m25.11688001s to acquireMachinesLock for "default-k8s-diff-port-125614"
	I0731 20:58:41.010557  188266 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:41.010566  188266 fix.go:54] fixHost starting: 
	I0731 20:58:41.010992  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:41.011033  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:41.030450  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0731 20:58:41.030910  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:41.031360  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:58:41.031382  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:41.031703  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:41.031859  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:41.032020  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:58:41.033653  188266 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125614: state=Stopped err=<nil>
	I0731 20:58:41.033695  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	W0731 20:58:41.033872  188266 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:41.035898  188266 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-125614" ...
	I0731 20:58:39.650969  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651458  188133 main.go:141] libmachine: (no-preload-916885) Found IP for machine: 192.168.72.239
	I0731 20:58:39.651475  188133 main.go:141] libmachine: (no-preload-916885) Reserving static IP address...
	I0731 20:58:39.651516  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has current primary IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651957  188133 main.go:141] libmachine: (no-preload-916885) Reserved static IP address: 192.168.72.239
	I0731 20:58:39.651995  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.652023  188133 main.go:141] libmachine: (no-preload-916885) Waiting for SSH to be available...
	I0731 20:58:39.652054  188133 main.go:141] libmachine: (no-preload-916885) DBG | skip adding static IP to network mk-no-preload-916885 - found existing host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"}
	I0731 20:58:39.652073  188133 main.go:141] libmachine: (no-preload-916885) DBG | Getting to WaitForSSH function...
	I0731 20:58:39.654095  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654450  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.654479  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654636  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH client type: external
	I0731 20:58:39.654659  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa (-rw-------)
	I0731 20:58:39.654714  188133 main.go:141] libmachine: (no-preload-916885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:39.654729  188133 main.go:141] libmachine: (no-preload-916885) DBG | About to run SSH command:
	I0731 20:58:39.654768  188133 main.go:141] libmachine: (no-preload-916885) DBG | exit 0
	I0731 20:58:39.781409  188133 main.go:141] libmachine: (no-preload-916885) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:39.781741  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetConfigRaw
	I0731 20:58:39.782349  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:39.784813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785234  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.785266  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785643  188133 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/config.json ...
	I0731 20:58:39.785859  188133 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:39.785879  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:39.786095  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.788573  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.788840  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.788868  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.789025  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.789203  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789495  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.789661  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.789927  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.789941  188133 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:39.901661  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:39.901687  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.901920  188133 buildroot.go:166] provisioning hostname "no-preload-916885"
	I0731 20:58:39.901953  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.902142  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.904763  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905159  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.905186  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905347  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.905534  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905698  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905822  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.905977  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.906137  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.906155  188133 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-916885 && echo "no-preload-916885" | sudo tee /etc/hostname
	I0731 20:58:40.030955  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-916885
	
	I0731 20:58:40.030979  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.033905  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034254  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.034276  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034487  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.034693  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.034868  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.035014  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.035197  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.035373  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.035392  188133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-916885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-916885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-916885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:40.154331  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:40.154381  188133 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:40.154436  188133 buildroot.go:174] setting up certificates
	I0731 20:58:40.154452  188133 provision.go:84] configureAuth start
	I0731 20:58:40.154474  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:40.154813  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:40.157702  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158053  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.158075  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158218  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.160715  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161030  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.161048  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161186  188133 provision.go:143] copyHostCerts
	I0731 20:58:40.161258  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:40.161267  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:40.161372  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:40.161477  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:40.161487  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:40.161520  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:40.161590  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:40.161606  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:40.161639  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:40.161700  188133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.no-preload-916885 san=[127.0.0.1 192.168.72.239 localhost minikube no-preload-916885]
	I0731 20:58:40.341529  188133 provision.go:177] copyRemoteCerts
	I0731 20:58:40.341586  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:40.341612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.344557  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.344851  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.344871  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.345080  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.345266  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.345432  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.345677  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.431395  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:40.455012  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 20:58:40.477721  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:40.500174  188133 provision.go:87] duration metric: took 345.705192ms to configureAuth
	I0731 20:58:40.500203  188133 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:40.500377  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 20:58:40.500462  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.503077  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503438  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.503467  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503586  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.503780  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.503947  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.504065  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.504245  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.504467  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.504489  188133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:58:40.765409  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:58:40.765448  188133 machine.go:97] duration metric: took 979.574417ms to provisionDockerMachine
	I0731 20:58:40.765460  188133 start.go:293] postStartSetup for "no-preload-916885" (driver="kvm2")
	I0731 20:58:40.765474  188133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:58:40.765525  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:40.765895  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:58:40.765928  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.768314  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768610  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.768657  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768760  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.768926  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.769089  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.769199  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.855821  188133 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:58:40.860032  188133 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:58:40.860071  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:58:40.860148  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:58:40.860251  188133 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:58:40.860367  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:58:40.869291  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:40.892945  188133 start.go:296] duration metric: took 127.469545ms for postStartSetup
	I0731 20:58:40.892991  188133 fix.go:56] duration metric: took 21.083232755s for fixHost
	I0731 20:58:40.893019  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.895784  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896166  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.896197  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896316  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.896501  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896654  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896777  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.896964  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.897133  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.897143  188133 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:58:41.010330  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459520.969906971
	
	I0731 20:58:41.010352  188133 fix.go:216] guest clock: 1722459520.969906971
	I0731 20:58:41.010360  188133 fix.go:229] Guest: 2024-07-31 20:58:40.969906971 +0000 UTC Remote: 2024-07-31 20:58:40.892995844 +0000 UTC m=+276.656012666 (delta=76.911127ms)
	I0731 20:58:41.010390  188133 fix.go:200] guest clock delta is within tolerance: 76.911127ms
	I0731 20:58:41.010396  188133 start.go:83] releasing machines lock for "no-preload-916885", held for 21.200662427s
	I0731 20:58:41.010429  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.010733  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:41.013519  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.013841  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.013867  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.014034  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014637  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014829  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014914  188133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:58:41.014974  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.015051  188133 ssh_runner.go:195] Run: cat /version.json
	I0731 20:58:41.015074  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.017813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.017837  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018170  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018205  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018225  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018493  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018678  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018694  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018862  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018885  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018965  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.019040  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.107999  188133 ssh_runner.go:195] Run: systemctl --version
	I0731 20:58:41.133039  188133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:58:41.279485  188133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:58:41.285765  188133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:58:41.285838  188133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:58:41.302175  188133 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:58:41.302203  188133 start.go:495] detecting cgroup driver to use...
	I0731 20:58:41.302280  188133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:58:41.319896  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:58:41.334618  188133 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:58:41.334689  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:58:41.348292  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:58:41.363968  188133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:58:41.472992  188133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:58:41.605581  188133 docker.go:233] disabling docker service ...
	I0731 20:58:41.605669  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:58:41.620414  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:58:41.632951  188133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:58:41.783942  188133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:58:41.912311  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:58:41.931076  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:58:41.954672  188133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 20:58:41.954752  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.967478  188133 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:58:41.967567  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.978990  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.991689  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.003168  188133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:58:42.019114  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.034607  188133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.057543  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.070420  188133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:58:42.081173  188133 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:58:42.081245  188133 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:58:42.095455  188133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:58:42.106943  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:42.221724  188133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:58:42.375966  188133 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:58:42.376051  188133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:58:42.381473  188133 start.go:563] Will wait 60s for crictl version
	I0731 20:58:42.381548  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.385364  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:58:42.426783  188133 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:58:42.426872  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.459096  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.490043  188133 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 20:58:42.491578  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:42.494915  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495289  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:42.495310  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495610  188133 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 20:58:42.500266  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:42.515164  188133 kubeadm.go:883] updating cluster {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:58:42.515295  188133 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 20:58:42.515332  188133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:58:42.551930  188133 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 20:58:42.551961  188133 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:58:42.552025  188133 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.552047  188133 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 20:58:42.552067  188133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.552087  188133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.552071  188133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.552028  188133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.552129  188133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.552035  188133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554026  188133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.554044  188133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.554103  188133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554112  188133 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 20:58:42.554123  188133 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.554030  188133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.554032  188133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.554027  188133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.721659  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.743910  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.750941  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 20:58:42.772074  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.781921  188133 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 20:58:42.781964  188133 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.782014  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.793926  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.813112  188133 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 20:58:42.813154  188133 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.813202  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.916544  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.937647  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.948145  188133 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 20:58:42.948194  188133 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.948208  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.948237  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.948268  188133 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 20:58:42.948300  188133 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.948338  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.948341  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.006187  188133 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 20:58:43.006238  188133 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.006295  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045484  188133 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 20:58:43.045541  188133 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.045585  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045589  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:43.045643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 20:58:43.045710  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 20:58:43.045730  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.045741  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:43.045780  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.045823  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:43.122382  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122429  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 20:58:43.122449  188133 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122489  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122497  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122513  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 20:58:43.122517  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122588  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.122637  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.122731  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.522969  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:41.037393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Start
	I0731 20:58:41.037575  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring networks are active...
	I0731 20:58:41.038366  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network default is active
	I0731 20:58:41.038703  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network mk-default-k8s-diff-port-125614 is active
	I0731 20:58:41.039402  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Getting domain xml...
	I0731 20:58:41.040218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Creating domain...
	I0731 20:58:42.319123  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting to get IP...
	I0731 20:58:42.320314  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320801  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320908  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.320797  189429 retry.go:31] will retry after 274.801111ms: waiting for machine to come up
	I0731 20:58:42.597444  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597866  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597914  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.597842  189429 retry.go:31] will retry after 382.328248ms: waiting for machine to come up
	I0731 20:58:42.981533  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982018  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982051  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.981955  189429 retry.go:31] will retry after 426.247953ms: waiting for machine to come up
	I0731 20:58:43.409523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.409795  189429 retry.go:31] will retry after 483.501118ms: waiting for machine to come up
	I0731 20:58:43.894451  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894844  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894874  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.894779  189429 retry.go:31] will retry after 759.968593ms: waiting for machine to come up
	I0731 20:58:44.656097  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656551  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656580  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:44.656503  189429 retry.go:31] will retry after 766.563008ms: waiting for machine to come up
	I0731 20:58:45.424382  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424793  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424831  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:45.424744  189429 retry.go:31] will retry after 1.172047019s: waiting for machine to come up
	I0731 20:58:45.107333  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.984807614s)
	I0731 20:58:45.107368  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 20:58:45.107393  188133 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107452  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107471  188133 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0: (1.98485492s)
	I0731 20:58:45.107523  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.985012474s)
	I0731 20:58:45.107534  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107560  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107563  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.984910291s)
	I0731 20:58:45.107585  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107609  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.984862504s)
	I0731 20:58:45.107619  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107626  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107668  188133 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.584674739s)
	I0731 20:58:45.107701  188133 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 20:58:45.107729  188133 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:45.107761  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:48.706832  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.599347822s)
	I0731 20:58:48.706872  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 20:58:48.706886  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (3.599247467s)
	I0731 20:58:48.706923  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 20:58:48.706898  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.706925  188133 ssh_runner.go:235] Completed: which crictl: (3.599146318s)
	I0731 20:58:48.706979  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:48.706980  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.747292  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 20:58:48.747415  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:46.598636  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599086  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599117  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:46.599033  189429 retry.go:31] will retry after 1.204122239s: waiting for machine to come up
	I0731 20:58:47.805441  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805922  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:47.805864  189429 retry.go:31] will retry after 1.466632244s: waiting for machine to come up
	I0731 20:58:49.274527  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275030  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:49.274961  189429 retry.go:31] will retry after 2.04848438s: waiting for machine to come up
	I0731 20:58:50.902082  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.154633427s)
	I0731 20:58:50.902138  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 20:58:50.902203  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.195118092s)
	I0731 20:58:50.902226  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 20:58:50.902259  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:50.902320  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:52.863335  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.960989386s)
	I0731 20:58:52.863370  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 20:58:52.863394  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:52.863434  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:51.324633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325056  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325080  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:51.324983  189429 retry.go:31] will retry after 1.991151757s: waiting for machine to come up
	I0731 20:58:53.318784  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319133  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319164  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:53.319077  189429 retry.go:31] will retry after 2.631932264s: waiting for machine to come up
	I0731 20:58:54.629811  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.766355185s)
	I0731 20:58:54.629840  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 20:58:54.629882  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:54.629954  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:55.983610  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.353622135s)
	I0731 20:58:55.983655  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 20:58:55.983692  188133 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:55.983764  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:56.828512  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 20:58:56.828560  188133 cache_images.go:123] Successfully loaded all cached images
	I0731 20:58:56.828568  188133 cache_images.go:92] duration metric: took 14.276593942s to LoadCachedImages
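The stat calls and "copy: skipping ... (exists)" lines above decide whether each cached image tarball needs to be pushed to the VM again. A rough local sketch of that size-plus-mtime comparison follows; the helper name and paths are assumptions for illustration, and the real check runs over SSH.

    // Illustrative sketch: skip re-copying a file when the destination already
    // has the same size and modification time.
    package main

    import (
        "fmt"
        "os"
    )

    // needsTransfer reports whether src should be copied over dst.
    func needsTransfer(src, dst string) (bool, error) {
        srcInfo, err := os.Stat(src)
        if err != nil {
            return false, fmt.Errorf("stat %s: %w", src, err)
        }
        dstInfo, err := os.Stat(dst)
        if os.IsNotExist(err) {
            return true, nil // nothing at the destination yet
        }
        if err != nil {
            return false, fmt.Errorf("stat %s: %w", dst, err)
        }
        same := srcInfo.Size() == dstInfo.Size() &&
            srcInfo.ModTime().Equal(dstInfo.ModTime())
        return !same, nil
    }

    func main() {
        // Hypothetical paths, standing in for the cache and VM-side image dirs.
        ok, err := needsTransfer("cache/etcd_3.5.14-0", "images/etcd_3.5.14-0")
        fmt.Println("needs transfer:", ok, "err:", err)
    }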
	I0731 20:58:56.828583  188133 kubeadm.go:934] updating node { 192.168.72.239 8443 v1.31.0-beta.0 crio true true} ...
	I0731 20:58:56.828722  188133 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-916885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:58:56.828806  188133 ssh_runner.go:195] Run: crio config
	I0731 20:58:56.877187  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:58:56.877222  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:58:56.877245  188133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:58:56.877269  188133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.239 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-916885 NodeName:no-preload-916885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:58:56.877442  188133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-916885"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:58:56.877526  188133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 20:58:56.887721  188133 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:58:56.887796  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:58:56.896845  188133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 20:58:56.912886  188133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 20:58:56.928914  188133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 20:58:56.945604  188133 ssh_runner.go:195] Run: grep 192.168.72.239	control-plane.minikube.internal$ /etc/hosts
	I0731 20:58:56.949538  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:56.961490  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:57.075114  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:58:57.091701  188133 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885 for IP: 192.168.72.239
	I0731 20:58:57.091724  188133 certs.go:194] generating shared ca certs ...
	I0731 20:58:57.091743  188133 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:58:57.091909  188133 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:58:57.091959  188133 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:58:57.091971  188133 certs.go:256] generating profile certs ...
	I0731 20:58:57.092062  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/client.key
	I0731 20:58:57.092141  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key.cc7e9c96
	I0731 20:58:57.092193  188133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key
	I0731 20:58:57.092330  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:58:57.092405  188133 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:58:57.092423  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:58:57.092458  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:58:57.092489  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:58:57.092520  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:58:57.092586  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:57.093296  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:58:57.139431  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:58:57.169132  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:58:57.196541  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:58:57.232826  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:58:57.260875  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:58:57.290195  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:58:57.316645  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:58:57.339741  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:58:57.362406  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:58:57.385009  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:58:57.407540  188133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:58:57.423697  188133 ssh_runner.go:195] Run: openssl version
	I0731 20:58:57.429741  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:58:57.440545  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.444984  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.445035  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.450651  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:58:57.460547  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:58:57.470575  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474939  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474988  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.480481  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:58:57.490404  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:58:57.500433  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504785  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504835  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.510165  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:58:57.520019  188133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:58:57.524596  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:58:57.530667  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:58:57.536315  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:58:57.542049  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:58:57.547594  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:58:57.553084  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
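The openssl runs above with -checkend 86400 ask whether each control-plane certificate will still be valid 24 hours from now. The equivalent check in Go, as a standalone sketch (the path in main is only an example):

    // Illustrative sketch: report whether a PEM certificate expires within the
    // given window, matching what "-checkend 86400" tests.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(window)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }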
	I0731 20:58:57.558419  188133 kubeadm.go:392] StartCluster: {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:58:57.558501  188133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:58:57.558537  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.600004  188133 cri.go:89] found id: ""
	I0731 20:58:57.600087  188133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:58:57.609911  188133 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:58:57.609933  188133 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:58:57.609975  188133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:58:57.619498  188133 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:58:57.621885  188133 kubeconfig.go:125] found "no-preload-916885" server: "https://192.168.72.239:8443"
	I0731 20:58:57.624838  188133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:58:57.633984  188133 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.239
	I0731 20:58:57.634025  188133 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:58:57.634037  188133 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:58:57.634080  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.672988  188133 cri.go:89] found id: ""
	I0731 20:58:57.673053  188133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:58:57.689149  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:58:57.698520  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:58:57.698541  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 20:58:57.698595  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:58:57.707106  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:58:57.707163  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:58:57.715878  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:58:57.724169  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:58:57.724219  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:58:57.732890  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.741121  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:58:57.741174  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.749776  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:58:57.758063  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:58:57.758114  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:58:57.766815  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:58:57.775595  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:57.883689  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.740684  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.926231  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.987089  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:59.049782  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:58:59.049862  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
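The restart path above rebuilds the control plane piecewise with kubeadm init phase subcommands rather than a full kubeadm init. A compact sketch that runs the same phases in order is shown below; it assumes kubeadm is on PATH and that the config file has already been written, and it stops at the first failure.

    // Illustrative sketch: drive the same kubeadm init phases the log shows,
    // in order, against a pre-generated config file.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        config := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            cmd := exec.Command("kubeadm", append(p, "--config", config)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }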
	I0731 20:59:00.418227  188656 start.go:364] duration metric: took 3m46.480116699s to acquireMachinesLock for "old-k8s-version-239115"
	I0731 20:59:00.418294  188656 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:00.418302  188656 fix.go:54] fixHost starting: 
	I0731 20:59:00.418738  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:00.418773  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:00.438533  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0731 20:59:00.438963  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:00.439499  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:59:00.439524  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:00.439930  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:00.441449  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:00.441651  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetState
	I0731 20:59:00.443465  188656 fix.go:112] recreateIfNeeded on old-k8s-version-239115: state=Stopped err=<nil>
	I0731 20:59:00.443505  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	W0731 20:59:00.443679  188656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:00.445840  188656 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239115" ...
	I0731 20:58:55.953940  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954422  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:55.954356  189429 retry.go:31] will retry after 3.068212527s: waiting for machine to come up
	I0731 20:58:59.025966  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026388  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has current primary IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026406  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Found IP for machine: 192.168.50.221
	I0731 20:58:59.026417  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserving static IP address...
	I0731 20:58:59.026867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserved static IP address: 192.168.50.221
	I0731 20:58:59.026918  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.026933  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for SSH to be available...
	I0731 20:58:59.026954  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | skip adding static IP to network mk-default-k8s-diff-port-125614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"}
	I0731 20:58:59.026972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Getting to WaitForSSH function...
	I0731 20:58:59.029330  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029685  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.029720  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029820  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH client type: external
	I0731 20:58:59.029846  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa (-rw-------)
	I0731 20:58:59.029877  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:59.029894  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | About to run SSH command:
	I0731 20:58:59.029906  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | exit 0
	I0731 20:58:59.161209  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:59.161713  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetConfigRaw
	I0731 20:58:59.162331  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.164645  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.164953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.164986  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.165269  188266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/config.json ...
	I0731 20:58:59.165479  188266 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:59.165503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:59.165692  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.167796  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.168110  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168247  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.168408  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168626  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168763  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.168901  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.169103  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.169115  188266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:59.281875  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:59.281901  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282185  188266 buildroot.go:166] provisioning hostname "default-k8s-diff-port-125614"
	I0731 20:58:59.282218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282460  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.284970  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285461  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.285498  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285612  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.285814  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286139  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.286278  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.286445  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.286460  188266 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125614 && echo "default-k8s-diff-port-125614" | sudo tee /etc/hostname
	I0731 20:58:59.411873  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125614
	
	I0731 20:58:59.411904  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.414733  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.415099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415274  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.415463  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415604  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415751  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.415898  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.416074  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.416090  188266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:59.539168  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:59.539210  188266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:59.539247  188266 buildroot.go:174] setting up certificates
	I0731 20:58:59.539256  188266 provision.go:84] configureAuth start
	I0731 20:58:59.539267  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.539595  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.542447  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.542887  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.542916  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.543103  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.545597  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.545972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.545992  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.546128  188266 provision.go:143] copyHostCerts
	I0731 20:58:59.546195  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:59.546206  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:59.546265  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:59.546366  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:59.546377  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:59.546407  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:59.546488  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:59.546492  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:59.546517  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:59.546565  188266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125614 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-125614 localhost minikube]
	I0731 20:58:59.690753  188266 provision.go:177] copyRemoteCerts
	I0731 20:58:59.690811  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:59.690839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.693800  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694141  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.694175  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694383  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.694583  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.694748  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.694884  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:58:59.783710  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:59.814512  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 20:58:59.843492  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:59.867793  188266 provision.go:87] duration metric: took 328.521723ms to configureAuth
	I0731 20:58:59.867840  188266 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:59.868013  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:58:59.868089  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.871214  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871655  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.871684  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871875  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.872127  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872309  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.872687  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.872909  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.872935  188266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:00.165458  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:00.165492  188266 machine.go:97] duration metric: took 999.996831ms to provisionDockerMachine
	I0731 20:59:00.165509  188266 start.go:293] postStartSetup for "default-k8s-diff-port-125614" (driver="kvm2")
	I0731 20:59:00.165527  188266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:00.165549  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.165936  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:00.165973  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.168477  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168837  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.168864  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168991  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.169203  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.169387  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.169575  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.262132  188266 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:00.266596  188266 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:00.266621  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:00.266695  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:00.266789  188266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:00.266909  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:00.276407  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:00.300017  188266 start.go:296] duration metric: took 134.490488ms for postStartSetup
	I0731 20:59:00.300061  188266 fix.go:56] duration metric: took 19.289494966s for fixHost
	I0731 20:59:00.300087  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.302714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303073  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.303106  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303249  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.303448  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303786  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.303978  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:00.304204  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:59:00.304217  188266 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:59:00.418073  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459540.389901096
	
	I0731 20:59:00.418096  188266 fix.go:216] guest clock: 1722459540.389901096
	I0731 20:59:00.418105  188266 fix.go:229] Guest: 2024-07-31 20:59:00.389901096 +0000 UTC Remote: 2024-07-31 20:59:00.30006642 +0000 UTC m=+284.542031804 (delta=89.834676ms)
	I0731 20:59:00.418130  188266 fix.go:200] guest clock delta is within tolerance: 89.834676ms
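	The fix step compares the host wall clock against the guest's `date +%s.%N` and only warns when the delta exceeds tolerance. A rough by-hand equivalent (profile name from the log; assumes `bc` on the host, and is not the exact code path fix.go uses):
	
	  # approximate the guest-clock delta check done by fix.go
	  host_now=$(date +%s.%N)
	  guest_now=$(minikube -p default-k8s-diff-port-125614 ssh -- date +%s.%N | tr -d '\r')
	  echo "clock delta: $(echo "$guest_now - $host_now" | bc) s"
	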
	I0731 20:59:00.418138  188266 start.go:83] releasing machines lock for "default-k8s-diff-port-125614", held for 19.407605953s
	I0731 20:59:00.418167  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.418669  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:00.421683  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422050  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.422090  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422234  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422999  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.423061  188266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:00.423119  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.423354  188266 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:00.423378  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.426188  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426362  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426603  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426631  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426790  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.426882  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426929  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.427019  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427197  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427208  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.427363  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.427380  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427668  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.511834  188266 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:00.536649  188266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:00.692463  188266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:00.700344  188266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:00.700413  188266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:00.721837  188266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:00.721863  188266 start.go:495] detecting cgroup driver to use...
	I0731 20:59:00.721940  188266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:00.742477  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:00.760049  188266 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:00.760120  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:00.777823  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:00.791680  188266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:00.908094  188266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:01.051284  188266 docker.go:233] disabling docker service ...
	I0731 20:59:01.051379  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:01.070927  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:01.083393  188266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:01.223186  188266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:01.355265  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
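	The block above stops and masks the Docker and cri-docker units so that only CRI-O ends up serving the CRI socket. The resulting unit state can be verified on the guest like this (illustrative check, unit names as in the log):
	
	  # after the disable/mask sequence both Docker front-ends should be inert
	  systemctl is-enabled cri-docker.socket docker.socket     # expect "disabled"
	  systemctl is-enabled cri-docker.service docker.service   # expect "masked"
	  systemctl is-active docker                               # expect anything but "active"
	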
	I0731 20:59:01.369810  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:01.390523  188266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:01.390588  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.401241  188266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:01.401308  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.412049  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.422145  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.432523  188266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:01.442640  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.456933  188266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.475628  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
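	All of the sed edits above target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. Once they have run, the keys of interest can be confirmed with a single grep (illustrative check, not something the log itself runs):
	
	  # pause image, cgroup driver, conmon cgroup and the unprivileged-port sysctl, post-edit
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	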
	I0731 20:59:01.486226  188266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:01.496757  188266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:01.496813  188266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:01.510264  188266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
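	The sysctl probe fails until the br_netfilter module is loaded, which is why the modprobe follows it. The same preparation can be done by hand on the guest (nothing here is minikube-specific):
	
	  sudo modprobe br_netfilter                            # makes /proc/sys/net/bridge/* appear
	  sysctl net.bridge.bridge-nf-call-iptables             # now resolves instead of "cannot stat"
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # required for pod traffic forwarding
	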
	I0731 20:59:01.520231  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:01.636451  188266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:01.784134  188266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:01.784220  188266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:01.788836  188266 start.go:563] Will wait 60s for crictl version
	I0731 20:59:01.788895  188266 ssh_runner.go:195] Run: which crictl
	I0731 20:59:01.793059  188266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:01.840110  188266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:01.840200  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.868816  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.908539  188266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:00.447208  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .Start
	I0731 20:59:00.447389  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring networks are active...
	I0731 20:59:00.448116  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network default is active
	I0731 20:59:00.448589  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network mk-old-k8s-version-239115 is active
	I0731 20:59:00.448892  188656 main.go:141] libmachine: (old-k8s-version-239115) Getting domain xml...
	I0731 20:59:00.450110  188656 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:59:01.823554  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting to get IP...
	I0731 20:59:01.824648  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:01.825111  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:01.825172  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:01.825080  189574 retry.go:31] will retry after 241.700507ms: waiting for machine to come up
	I0731 20:59:02.068913  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.069608  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.069738  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.069663  189574 retry.go:31] will retry after 258.921821ms: waiting for machine to come up
	I0731 20:59:02.330231  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.330846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.330877  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.330776  189574 retry.go:31] will retry after 460.911793ms: waiting for machine to come up
	I0731 20:59:02.793718  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.794177  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.794206  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.794156  189574 retry.go:31] will retry after 380.241989ms: waiting for machine to come up
	I0731 20:59:03.175918  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.176761  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.176786  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.176670  189574 retry.go:31] will retry after 631.876736ms: waiting for machine to come up
	I0731 20:59:03.810803  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.811478  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.811503  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.811366  189574 retry.go:31] will retry after 583.328017ms: waiting for machine to come up
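	The "Waiting to get IP" retries above poll libvirt for a DHCP lease on the profile's network. The equivalent manual query on the host would be something like the following (network name from the log; assumes the libvirt CLI is available):
	
	  # list leases handed out on the old-k8s-version network; empty until the guest requests one
	  sudo virsh net-dhcp-leases mk-old-k8s-version-239115
	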
	I0731 20:58:59.550347  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.050077  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.066942  188133 api_server.go:72] duration metric: took 1.017157745s to wait for apiserver process to appear ...
	I0731 20:59:00.066991  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:00.067016  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:00.067685  188133 api_server.go:269] stopped: https://192.168.72.239:8443/healthz: Get "https://192.168.72.239:8443/healthz": dial tcp 192.168.72.239:8443: connect: connection refused
	I0731 20:59:00.567237  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.555694  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.555739  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.555756  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.606602  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.606641  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.606657  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.617900  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.617935  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:04.067724  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.073838  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.073875  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:04.568116  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.575013  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.575044  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:05.067154  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:05.073314  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 20:59:05.083559  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 20:59:05.083595  188133 api_server.go:131] duration metric: took 5.016595337s to wait for apiserver health ...
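	The healthz wait above starts at connection-refused, passes through 403 (anonymous requests are rejected while RBAC bootstrap is still running) and 500 (bootstrap post-start hooks pending), and finally settles at 200. A hand-rolled version of the same poll, using the endpoint from the log, might look like this (illustrative only):
	
	  # poll until the apiserver reports healthy, then dump the per-check breakdown
	  until curl -sk -o /dev/null -w '%{http_code}' https://192.168.72.239:8443/healthz | grep -q 200; do
	    sleep 0.5
	  done
	  curl -sk 'https://192.168.72.239:8443/healthz?verbose'
	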
	I0731 20:59:05.083606  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:59:05.083614  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:05.085564  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:01.910091  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:01.913322  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.913714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:01.913747  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.914046  188266 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:01.918504  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
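	The bash one-liner above rewrites /etc/hosts in place so the guest can resolve host.minikube.internal to the gateway. Afterwards the entry looks like this (values taken from the command itself; the check is illustrative):
	
	  # on the guest, after the rewrite
	  grep 'host.minikube.internal' /etc/hosts
	  # 192.168.50.1	host.minikube.internal
	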
	I0731 20:59:01.930599  188266 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:01.930756  188266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:01.930826  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:01.968796  188266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:01.968882  188266 ssh_runner.go:195] Run: which lz4
	I0731 20:59:01.974123  188266 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:59:01.979542  188266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:01.979575  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:03.529579  188266 crio.go:462] duration metric: took 1.555502358s to copy over tarball
	I0731 20:59:03.529662  188266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:04.395886  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:04.396400  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:04.396664  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:04.396347  189574 retry.go:31] will retry after 1.154504022s: waiting for machine to come up
	I0731 20:59:05.552240  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:05.552879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:05.552901  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:05.552831  189574 retry.go:31] will retry after 1.037365333s: waiting for machine to come up
	I0731 20:59:06.591875  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:06.592416  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:06.592450  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:06.592329  189574 retry.go:31] will retry after 1.249444079s: waiting for machine to come up
	I0731 20:59:07.843058  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:07.843436  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:07.843463  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:07.843370  189574 retry.go:31] will retry after 1.700521776s: waiting for machine to come up
	I0731 20:59:05.087080  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:05.105303  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:05.125019  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:05.136768  188133 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:05.136823  188133 system_pods.go:61] "coredns-5cfdc65f69-c9gcf" [3b9458d3-81d0-4138-8a6a-81f087c3ed02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:05.136836  188133 system_pods.go:61] "etcd-no-preload-916885" [aa31006d-8e74-48c2-9b5d-5604b3a1c283] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:05.136847  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [64549ba0-8e30-41d3-82eb-cdb729623a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:05.136856  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [2620c741-c27a-4df5-8555-58767d43c675] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:05.136866  188133 system_pods.go:61] "kube-proxy-99jgm" [0060c1a0-badc-401c-a4dc-5cfaa958654e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:05.136880  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [f02a0a1d-5cbb-4ee3-a084-21710667565e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:05.136894  188133 system_pods.go:61] "metrics-server-78fcd8795b-jrzgg" [acbe48be-32e9-44f8-9bf2-52e0e92a09e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:05.136912  188133 system_pods.go:61] "storage-provisioner" [d0f902cd-d1db-4c70-bdad-34bda917cec1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:05.136926  188133 system_pods.go:74] duration metric: took 11.882384ms to wait for pod list to return data ...
	I0731 20:59:05.136937  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:05.142117  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:05.142149  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:05.142165  188133 node_conditions.go:105] duration metric: took 5.221098ms to run NodePressure ...
	I0731 20:59:05.142187  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:05.534597  188133 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539583  188133 kubeadm.go:739] kubelet initialised
	I0731 20:59:05.539604  188133 kubeadm.go:740] duration metric: took 4.980297ms waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539626  188133 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:05.544498  188133 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:07.778624  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:06.024682  188266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.494984583s)
	I0731 20:59:06.024718  188266 crio.go:469] duration metric: took 2.495107603s to extract the tarball
	I0731 20:59:06.024729  188266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:06.062675  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:06.107619  188266 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:06.107649  188266 cache_images.go:84] Images are preloaded, skipping loading
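	Once the preload tarball has been unpacked into /var, the `crictl images` output is what decides that nothing more needs to be pulled. The same check can be run by hand on the guest (image name as used by this Kubernetes version; illustrative):
	
	  # the control-plane images should now be present locally
	  sudo crictl images | grep registry.k8s.io/kube-apiserver
	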
	I0731 20:59:06.107667  188266 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0731 20:59:06.107792  188266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-125614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:06.107863  188266 ssh_runner.go:195] Run: crio config
	I0731 20:59:06.173983  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:06.174007  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:06.174019  188266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:06.174040  188266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125614 NodeName:default-k8s-diff-port-125614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:06.174168  188266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125614"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:06.174233  188266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:06.185059  188266 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:06.185189  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:06.196571  188266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 20:59:06.218964  188266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:06.239033  188266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
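	The rendered kubeadm config shown above is shipped to /var/tmp/minikube/kubeadm.yaml.new before the restart path decides whether a full re-init is needed. Given a local copy of such a file, kubeadm (v1.30.x here) can also sanity-check it offline; the log does not run this, so the file name is hypothetical:
	
	  # client-side validation of a generated kubeadm config
	  kubeadm config validate --config kubeadm.yaml
	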
	I0731 20:59:06.260519  188266 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:06.264718  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:06.278173  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:06.423941  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
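	The two scp'd unit files plus the daemon-reload/start above are what put the flags from the [Service] block into effect. On the guest the merged unit can be viewed with standard systemd tooling (not part of the log):
	
	  # kubelet.service plus the 10-kubeadm.conf override written above
	  systemctl cat kubelet
	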
	I0731 20:59:06.441663  188266 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614 for IP: 192.168.50.221
	I0731 20:59:06.441689  188266 certs.go:194] generating shared ca certs ...
	I0731 20:59:06.441711  188266 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:06.441906  188266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:06.441965  188266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:06.441978  188266 certs.go:256] generating profile certs ...
	I0731 20:59:06.442080  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/client.key
	I0731 20:59:06.442157  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key.9cb12361
	I0731 20:59:06.442205  188266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key
	I0731 20:59:06.442354  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:06.442391  188266 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:06.442404  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:06.442447  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:06.442478  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:06.442522  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:06.442580  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:06.443470  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:06.497056  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:06.530978  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:06.574533  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:06.619523  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 20:59:06.648269  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:06.677824  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:06.704450  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:06.731606  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:06.756990  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:06.781214  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:06.804855  188266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:06.821531  188266 ssh_runner.go:195] Run: openssl version
	I0731 20:59:06.827394  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:06.838680  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843618  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843681  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.850238  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:06.865533  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:06.881516  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886809  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886876  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.893345  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:06.908919  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:06.922150  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927165  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927226  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.933724  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
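	The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding certificates, which is how the system trust-store lookup finds them. The hash can be reproduced for any of them, for example:
	
	  # prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	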
	I0731 20:59:06.946420  188266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:06.951347  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:06.959595  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:06.967808  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:06.977083  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:06.985089  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:06.992190  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
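	Each `-checkend 86400` above exits 0 if the certificate is still valid 24 hours from now and non-zero otherwise, which is how the restart path decides the existing certs can be reused. The same test against any single cert (path from the log; check is illustrative):
	
	  # exit status 0 means "will not expire within the next 24h"
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "apiserver.crt valid for >24h"
	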
	I0731 20:59:06.998458  188266 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:06.998548  188266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:06.998592  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.053176  188266 cri.go:89] found id: ""
	I0731 20:59:07.053256  188266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:07.064373  188266 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:07.064392  188266 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:07.064433  188266 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:07.075167  188266 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:07.076057  188266 kubeconfig.go:125] found "default-k8s-diff-port-125614" server: "https://192.168.50.221:8444"
	I0731 20:59:07.078091  188266 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:07.089136  188266 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0731 20:59:07.089161  188266 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:07.089174  188266 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:07.089225  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.133015  188266 cri.go:89] found id: ""
	I0731 20:59:07.133099  188266 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:07.155229  188266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:07.166326  188266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:07.166348  188266 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:07.166418  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 20:59:07.176709  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:07.176768  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:07.187232  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 20:59:07.197376  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:07.197453  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:07.209451  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.221141  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:07.221205  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.232016  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 20:59:07.242340  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:07.242402  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:07.253794  188266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:07.264912  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:07.382193  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.445321  188266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.063086935s)
	I0731 20:59:08.445364  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.664603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.744053  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.857284  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:08.857380  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.357505  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.857488  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.887329  188266 api_server.go:72] duration metric: took 1.030046485s to wait for apiserver process to appear ...
	I0731 20:59:09.887358  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:09.887405  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.887966  188266 api_server.go:269] stopped: https://192.168.50.221:8444/healthz: Get "https://192.168.50.221:8444/healthz": dial tcp 192.168.50.221:8444: connect: connection refused
	I0731 20:59:10.387674  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.545937  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:09.546581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:09.546605  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:09.546529  189574 retry.go:31] will retry after 1.934269586s: waiting for machine to come up
	I0731 20:59:11.482402  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:11.482794  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:11.482823  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:11.482744  189574 retry.go:31] will retry after 2.575131422s: waiting for machine to come up
	I0731 20:59:10.053236  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:10.551437  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:10.551467  188133 pod_ready.go:81] duration metric: took 5.006944467s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:10.551480  188133 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:12.559346  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:12.827297  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.827342  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.827390  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.883496  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.883538  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.887715  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.902715  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:12.902746  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.388340  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.392840  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.392872  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.888510  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.894519  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.894553  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:14.388177  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:14.392557  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 20:59:14.399285  188266 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:14.399321  188266 api_server.go:131] duration metric: took 4.511955505s to wait for apiserver health ...
	I0731 20:59:14.399333  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:14.399340  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:14.400987  188266 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:14.401981  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:14.420648  188266 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:14.441909  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:14.451365  188266 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:14.451406  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:14.451419  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:14.451426  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:14.451432  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:14.451438  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:14.451444  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:14.451461  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:14.451468  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:14.451476  188266 system_pods.go:74] duration metric: took 9.546534ms to wait for pod list to return data ...
	I0731 20:59:14.451486  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:14.454760  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:14.454784  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:14.454795  188266 node_conditions.go:105] duration metric: took 3.303087ms to run NodePressure ...
	I0731 20:59:14.454820  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:14.730635  188266 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735144  188266 kubeadm.go:739] kubelet initialised
	I0731 20:59:14.735165  188266 kubeadm.go:740] duration metric: took 4.500388ms waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735173  188266 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:14.742292  188266 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.749460  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749486  188266 pod_ready.go:81] duration metric: took 7.166399ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.749496  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749504  188266 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.757068  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757091  188266 pod_ready.go:81] duration metric: took 7.579526ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.757101  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757109  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.762181  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762203  188266 pod_ready.go:81] duration metric: took 5.083756ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.762213  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762219  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.845070  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845095  188266 pod_ready.go:81] duration metric: took 82.86894ms for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.845107  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845113  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.246100  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246131  188266 pod_ready.go:81] duration metric: took 401.011321ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.246150  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246159  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.645657  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645689  188266 pod_ready.go:81] duration metric: took 399.519543ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.645704  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645713  188266 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.045744  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045776  188266 pod_ready.go:81] duration metric: took 400.053102ms for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:16.045791  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045800  188266 pod_ready.go:38] duration metric: took 1.310615323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:16.045838  188266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:59:16.059046  188266 ops.go:34] apiserver oom_adj: -16
	I0731 20:59:16.059071  188266 kubeadm.go:597] duration metric: took 8.994671774s to restartPrimaryControlPlane
	I0731 20:59:16.059082  188266 kubeadm.go:394] duration metric: took 9.060633072s to StartCluster
	I0731 20:59:16.059104  188266 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.059181  188266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:16.060895  188266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.061143  188266 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:59:16.061226  188266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:59:16.061324  188266 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061386  188266 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061399  188266 addons.go:243] addon storage-provisioner should already be in state true
	I0731 20:59:16.061388  188266 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061400  188266 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061453  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:16.061495  188266 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061516  188266 addons.go:243] addon metrics-server should already be in state true
	I0731 20:59:16.061438  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061603  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061436  188266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125614"
	I0731 20:59:16.062072  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062084  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062085  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062110  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062127  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062188  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062822  188266 out.go:177] * Verifying Kubernetes components...
	I0731 20:59:16.064337  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:16.081194  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I0731 20:59:16.081208  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0731 20:59:16.081197  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0731 20:59:16.081872  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.081956  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082026  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082423  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082439  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.082926  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082951  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083047  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.083058  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.083076  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083712  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.083754  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.084871  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085484  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085734  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.085815  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.085845  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.089827  188266 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.089854  188266 addons.go:243] addon default-storageclass should already be in state true
	I0731 20:59:16.089884  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.090245  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.090301  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.106592  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0731 20:59:16.106609  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 20:59:16.108751  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.108849  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.109414  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109442  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109546  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109576  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109948  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.109953  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.110132  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.110163  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.111216  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0731 20:59:16.111657  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.112217  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.112239  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.112319  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.113374  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.115608  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.115649  188266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:16.115940  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.115979  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.116965  188266 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:16.117053  188266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.117069  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:59:16.117083  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.118247  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 20:59:16.118268  188266 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 20:59:16.118288  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.120985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121540  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.121563  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121764  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.121865  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.122295  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.122371  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.122490  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122552  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.122632  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.122850  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.123024  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.123218  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.133929  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0731 20:59:16.134348  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.134844  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.134865  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.135175  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.135389  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.136985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.137272  188266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.137287  188266 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:59:16.137313  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.140222  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140543  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.140560  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.140762  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140795  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.140969  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.141107  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.257677  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:16.275791  188266 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:16.373528  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 20:59:16.373552  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 20:59:16.380797  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.404028  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.406072  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 20:59:16.406098  188266 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 20:59:16.456003  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:16.456030  188266 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 20:59:16.517304  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:17.377438  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377468  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377514  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377565  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377765  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377780  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377797  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377827  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377835  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377930  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378354  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378417  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378424  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.378569  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378583  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.384110  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.384130  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.384325  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.384341  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428457  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428480  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428766  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.428782  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428804  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.429011  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.429024  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.429040  188266 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-125614"
	I0731 20:59:17.431884  188266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 20:59:14.059385  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:14.059857  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:14.059879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:14.059819  189574 retry.go:31] will retry after 3.127857327s: waiting for machine to come up
	I0731 20:59:17.189405  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:17.189871  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:17.189902  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:17.189821  189574 retry.go:31] will retry after 4.516767425s: waiting for machine to come up
	I0731 20:59:14.559493  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:16.561540  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:16.561568  188133 pod_ready.go:81] duration metric: took 6.010079286s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.561580  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068734  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.068756  188133 pod_ready.go:81] duration metric: took 1.507167128s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068766  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073069  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.073086  188133 pod_ready.go:81] duration metric: took 4.313817ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073095  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077480  188133 pod_ready.go:92] pod "kube-proxy-99jgm" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.077497  188133 pod_ready.go:81] duration metric: took 4.395483ms for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077506  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082197  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.082221  188133 pod_ready.go:81] duration metric: took 4.709042ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082234  188133 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:17.433072  188266 addons.go:510] duration metric: took 1.371850333s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 20:59:18.280135  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:20.280881  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.082812  187862 start.go:364] duration metric: took 58.27194035s to acquireMachinesLock for "embed-certs-831240"
	I0731 20:59:23.082866  187862 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:23.082875  187862 fix.go:54] fixHost starting: 
	I0731 20:59:23.083267  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:23.083308  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:23.101291  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0731 20:59:23.101826  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:23.102464  187862 main.go:141] libmachine: Using API Version  1
	I0731 20:59:23.102498  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:23.102817  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:23.103024  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:23.103187  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 20:59:23.105117  187862 fix.go:112] recreateIfNeeded on embed-certs-831240: state=Stopped err=<nil>
	I0731 20:59:23.105143  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	W0731 20:59:23.105307  187862 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:23.106919  187862 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831240" ...
	I0731 20:59:21.708296  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708811  188656 main.go:141] libmachine: (old-k8s-version-239115) Found IP for machine: 192.168.61.51
	I0731 20:59:21.708846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has current primary IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708860  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserving static IP address...
	I0731 20:59:21.709432  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.709663  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserved static IP address: 192.168.61.51
	I0731 20:59:21.709695  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | skip adding static IP to network mk-old-k8s-version-239115 - found existing host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"}
	I0731 20:59:21.709711  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting for SSH to be available...
	I0731 20:59:21.709723  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:59:21.711911  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712310  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.712345  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712517  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:59:21.712540  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:59:21.712581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:21.712598  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:59:21.712625  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:59:21.838026  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:21.838370  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:59:21.839169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:21.842168  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842588  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.842623  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842866  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:59:21.843126  188656 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:21.843150  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:21.843388  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.846148  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846657  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.846686  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.847165  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847360  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847530  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.847707  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.847938  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.847951  188656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:21.955109  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:21.955143  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955460  188656 buildroot.go:166] provisioning hostname "old-k8s-version-239115"
	I0731 20:59:21.955492  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955728  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.958752  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959146  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.959176  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959395  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.959620  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959781  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959918  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.960078  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.960358  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.960378  188656 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239115 && echo "old-k8s-version-239115" | sudo tee /etc/hostname
	I0731 20:59:22.090625  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239115
	
	I0731 20:59:22.090665  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.093927  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094356  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.094387  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094729  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.094942  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095153  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095364  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.095583  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.095819  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.095845  188656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:22.217153  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:22.217189  188656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:22.217215  188656 buildroot.go:174] setting up certificates
	I0731 20:59:22.217229  188656 provision.go:84] configureAuth start
	I0731 20:59:22.217242  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:22.217613  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:22.220640  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221082  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.221125  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221237  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.223811  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224152  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.224180  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224337  188656 provision.go:143] copyHostCerts
	I0731 20:59:22.224405  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:22.224418  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:22.224485  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:22.224604  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:22.224616  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:22.224654  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:22.224729  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:22.224740  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:22.224766  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:22.224833  188656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239115 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-239115]
	I0731 20:59:22.407532  188656 provision.go:177] copyRemoteCerts
	I0731 20:59:22.407599  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:22.407625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.410594  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411007  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.411033  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411338  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.411582  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.411811  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.412007  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.492781  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:22.518278  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 20:59:22.543018  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:22.568888  188656 provision.go:87] duration metric: took 351.643ms to configureAuth
	I0731 20:59:22.568920  188656 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:22.569099  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:59:22.569169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.572154  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572471  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.572500  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572669  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.572872  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.572993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.573112  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.573249  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.573481  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.573512  188656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:22.847156  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:22.847193  188656 machine.go:97] duration metric: took 1.004049055s to provisionDockerMachine
	I0731 20:59:22.847211  188656 start.go:293] postStartSetup for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:59:22.847229  188656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:22.847284  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:22.847710  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:22.847741  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.850515  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.850935  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.850962  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.851088  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.851288  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.851524  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.851674  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.932316  188656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:22.936672  188656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:22.936707  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:22.936792  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:22.936894  188656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:22.937011  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:22.946454  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:22.972952  188656 start.go:296] duration metric: took 125.72216ms for postStartSetup
	I0731 20:59:22.972996  188656 fix.go:56] duration metric: took 22.554695114s for fixHost
	I0731 20:59:22.973026  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.975758  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976166  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.976198  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976320  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.976585  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976782  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976966  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.977115  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.977275  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.977284  188656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:23.082657  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459563.026856067
	
	I0731 20:59:23.082683  188656 fix.go:216] guest clock: 1722459563.026856067
	I0731 20:59:23.082694  188656 fix.go:229] Guest: 2024-07-31 20:59:23.026856067 +0000 UTC Remote: 2024-07-31 20:59:22.973000729 +0000 UTC m=+249.171273714 (delta=53.855338ms)
	I0731 20:59:23.082721  188656 fix.go:200] guest clock delta is within tolerance: 53.855338ms
	I0731 20:59:23.082727  188656 start.go:83] releasing machines lock for "old-k8s-version-239115", held for 22.664459101s
	I0731 20:59:23.082752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.083052  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:23.086626  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087093  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.087135  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087366  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.087954  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088159  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088251  188656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:23.088303  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.088370  188656 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:23.088392  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.091710  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.091989  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092073  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092101  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092227  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092429  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.092472  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092520  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092618  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.092752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092803  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.092931  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.093100  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.093255  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.175012  188656 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:23.200192  188656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:23.348227  188656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:23.355109  188656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:23.355195  188656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:23.371683  188656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:23.371707  188656 start.go:495] detecting cgroup driver to use...
	I0731 20:59:23.371786  188656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:23.388727  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:23.408830  188656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:23.408907  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:23.423594  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:23.437876  188656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:23.559105  188656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:23.743186  188656 docker.go:233] disabling docker service ...
	I0731 20:59:23.743253  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:23.758053  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:23.779951  188656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:20.089173  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:22.092138  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.919494  188656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:24.057230  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:24.072687  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:24.094528  188656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:59:24.094600  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.106579  188656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:24.106634  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.120079  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.130759  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.142925  188656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:24.154760  188656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:24.165059  188656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:24.165113  188656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:24.179567  188656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:24.191838  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:24.339078  188656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:24.515723  188656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:24.515810  188656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:24.521882  188656 start.go:563] Will wait 60s for crictl version
	I0731 20:59:24.521966  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:24.527655  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:24.581055  188656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:24.581151  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.623207  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.662956  188656 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:59:22.780311  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.281324  188266 node_ready.go:49] node "default-k8s-diff-port-125614" has status "Ready":"True"
	I0731 20:59:23.281373  188266 node_ready.go:38] duration metric: took 7.005540469s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:23.281387  188266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:23.291207  188266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299173  188266 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.299202  188266 pod_ready.go:81] duration metric: took 7.971632ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299215  188266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307561  188266 pod_ready.go:92] pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.307580  188266 pod_ready.go:81] duration metric: took 8.357239ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307589  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314466  188266 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.314544  188266 pod_ready.go:81] duration metric: took 6.946044ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314565  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.323341  188266 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.108292  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Start
	I0731 20:59:23.108473  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring networks are active...
	I0731 20:59:23.109160  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network default is active
	I0731 20:59:23.109575  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network mk-embed-certs-831240 is active
	I0731 20:59:23.110032  187862 main.go:141] libmachine: (embed-certs-831240) Getting domain xml...
	I0731 20:59:23.110762  187862 main.go:141] libmachine: (embed-certs-831240) Creating domain...
	I0731 20:59:24.457926  187862 main.go:141] libmachine: (embed-certs-831240) Waiting to get IP...
	I0731 20:59:24.458936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.459381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.459477  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.459375  189758 retry.go:31] will retry after 266.695372ms: waiting for machine to come up
	I0731 20:59:24.727938  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.728394  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.728532  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.728451  189758 retry.go:31] will retry after 349.84093ms: waiting for machine to come up
	I0731 20:59:25.080044  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.080634  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.080668  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.080592  189758 retry.go:31] will retry after 324.555122ms: waiting for machine to come up
	I0731 20:59:25.407332  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.407852  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.407877  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.407795  189758 retry.go:31] will retry after 580.815897ms: waiting for machine to come up
	I0731 20:59:25.990957  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.991551  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.991578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.991468  189758 retry.go:31] will retry after 570.045476ms: waiting for machine to come up
	I0731 20:59:26.563493  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:26.563901  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:26.563931  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:26.563853  189758 retry.go:31] will retry after 582.597352ms: waiting for machine to come up
	I0731 20:59:27.148256  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:27.148744  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:27.148773  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:27.148688  189758 retry.go:31] will retry after 1.105713474s: waiting for machine to come up
	I0731 20:59:24.664851  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:24.668464  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.668842  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:24.668869  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.669103  188656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:24.674448  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:24.690857  188656 kubeadm.go:883] updating cluster {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:24.691011  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:59:24.691056  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:24.744259  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:24.744348  188656 ssh_runner.go:195] Run: which lz4
	I0731 20:59:24.749358  188656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:24.754299  188656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:24.754341  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:59:26.551495  188656 crio.go:462] duration metric: took 1.802206904s to copy over tarball
	I0731 20:59:26.551571  188656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:24.589677  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:26.591079  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:29.089923  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:25.824008  188266 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.824037  188266 pod_ready.go:81] duration metric: took 2.509461823s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.824052  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840569  188266 pod_ready.go:92] pod "kube-proxy-csdc4" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.840595  188266 pod_ready.go:81] duration metric: took 16.533543ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840613  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103726  188266 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:26.103759  188266 pod_ready.go:81] duration metric: took 263.1364ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103774  188266 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:28.112583  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:30.610462  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:28.255818  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:28.256478  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:28.256506  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:28.256408  189758 retry.go:31] will retry after 1.3552249s: waiting for machine to come up
	I0731 20:59:29.613070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:29.613661  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:29.613693  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:29.613620  189758 retry.go:31] will retry after 1.522319436s: waiting for machine to come up
	I0731 20:59:31.138020  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:31.138490  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:31.138522  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:31.138434  189758 retry.go:31] will retry after 1.573723862s: waiting for machine to come up
	I0731 20:59:29.653941  188656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102337952s)
	I0731 20:59:29.653974  188656 crio.go:469] duration metric: took 3.102444338s to extract the tarball
	I0731 20:59:29.653982  188656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:29.704065  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:29.745966  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:29.746010  188656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:59:29.746076  188656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.746107  188656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.746129  188656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.746149  188656 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.746170  188656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:59:29.746410  188656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.746423  188656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.746735  188656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.747998  188656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.748005  188656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.748021  188656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.748091  188656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.915865  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.918049  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.950840  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.952762  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.956317  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.959905  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:59:30.000707  188656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:59:30.000768  188656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.000821  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.007207  188656 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:59:30.007251  188656 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.007294  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.016613  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.082306  188656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:59:30.082358  188656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.082364  188656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:59:30.082414  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.082418  188656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.082557  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.089299  188656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:59:30.089382  188656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.089427  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.105150  188656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:59:30.105217  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.105246  188656 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:59:30.105264  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.105282  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.129702  188656 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:59:30.129748  188656 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.129779  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.129826  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.129853  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.129800  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.188192  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:59:30.188243  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:59:30.188342  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:59:30.188365  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.268231  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:59:30.268296  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:59:30.268337  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:59:30.287822  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:59:30.287929  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:59:30.635440  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:30.776879  188656 cache_images.go:92] duration metric: took 1.030849977s to LoadCachedImages
	W0731 20:59:30.777006  188656 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
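The cache-load fallback above works image by image: each tag that kubeadm v1.20.0 needs is looked up with podman, anything missing or mismatched is removed with crictl, and minikube would then load the tarballs from ~/.minikube/cache/images; here those cache files are absent, so it continues without them. A minimal sketch of the same check, run inside the node (e.g. via `minikube ssh`), using the image list and commands shown in the log:

    # Report which of the expected images are missing from the CRI-O store
    # (mirrors the podman inspect / crictl rmi calls in the log above).
    for img in \
        registry.k8s.io/kube-apiserver:v1.20.0 \
        registry.k8s.io/kube-controller-manager:v1.20.0 \
        registry.k8s.io/kube-scheduler:v1.20.0 \
        registry.k8s.io/kube-proxy:v1.20.0 \
        registry.k8s.io/etcd:3.4.13-0 \
        registry.k8s.io/coredns:1.7.0 \
        registry.k8s.io/pause:3.2 \
        gcr.io/k8s-minikube/storage-provisioner:v5; do
      if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        echo "$img needs transfer"
        sudo /usr/bin/crictl rmi "$img" >/dev/null 2>&1 || true
      fi
    done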
	I0731 20:59:30.777028  188656 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0731 20:59:30.777175  188656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:30.777284  188656 ssh_runner.go:195] Run: crio config
	I0731 20:59:30.832542  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:59:30.832570  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:30.832586  188656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:30.832618  188656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239115 NodeName:old-k8s-version-239115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:59:30.832798  188656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239115"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:30.832877  188656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:59:30.842909  188656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:30.842995  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:30.852951  188656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 20:59:30.872643  188656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:30.889851  188656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 20:59:30.910958  188656 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:30.915645  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:30.928698  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:31.055628  188656 ssh_runner.go:195] Run: sudo systemctl start kubelet
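Before kubelet is started, /etc/hosts is rewritten so control-plane.minikube.internal resolves to the node IP. A readable equivalent of the one-liner at 20:59:30.915645, with the IP pulled out into a variable (sketch only; 192.168.61.51 is the address from this log):

    IP=192.168.61.51
    # Drop any stale control-plane.minikube.internal entry, append the current one,
    # then reload systemd units and start kubelet (same sequence as the log).
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '%s\tcontrol-plane.minikube.internal\n' "$IP"
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
    sudo systemctl daemon-reload
    sudo systemctl start kubelet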
	I0731 20:59:31.076731  188656 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115 for IP: 192.168.61.51
	I0731 20:59:31.076759  188656 certs.go:194] generating shared ca certs ...
	I0731 20:59:31.076789  188656 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.076979  188656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:31.077041  188656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:31.077057  188656 certs.go:256] generating profile certs ...
	I0731 20:59:31.077175  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key
	I0731 20:59:31.077378  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83
	I0731 20:59:31.077514  188656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key
	I0731 20:59:31.077704  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:31.077789  188656 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:31.077806  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:31.077854  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:31.077892  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:31.077932  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:31.077997  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:31.078906  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:31.126980  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:31.167327  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:31.211947  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:31.258307  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 20:59:31.296628  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:31.342330  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:31.391114  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:31.415097  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:31.442595  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:31.472160  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:31.497814  188656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:31.515890  188656 ssh_runner.go:195] Run: openssl version
	I0731 20:59:31.523423  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:31.537984  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544161  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544225  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.552590  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:31.567190  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:31.581206  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586903  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586966  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.593485  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:31.606764  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:31.619748  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624599  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624681  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.631293  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
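Each CA bundle copied into /usr/share/ca-certificates is then symlinked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is how OpenSSL's CApath lookup finds trust anchors. The same idiom for a single PEM, using the path from the log:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")   # e.g. b5213941
    # Link the cert under its hash so OpenSSL can resolve it from /etc/ssl/certs.
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"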
	I0731 20:59:31.642823  188656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:31.647273  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:31.653142  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:31.659046  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:31.665552  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:31.671454  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:31.677426  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
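The `-checkend 86400` calls confirm that every control-plane certificate remains valid for at least 24 hours before it is reused; a non-zero exit would force regeneration. The same check over the certificates named in the log, condensed into a loop:

    # Exit status 0 = still valid for at least another day; otherwise flag it.
    for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt \
               etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
               front-proxy-client.crt; do
      if ! sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$crt"; then
        echo "certificate $crt expires within 24h"
      fi
    done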
	I0731 20:59:31.683490  188656 kubeadm.go:392] StartCluster: {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:31.683586  188656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:31.683625  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.725466  188656 cri.go:89] found id: ""
	I0731 20:59:31.725548  188656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:31.737025  188656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:31.737050  188656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:31.737113  188656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:31.747325  188656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:31.748325  188656 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:31.748965  188656 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239115" cluster setting kubeconfig missing "old-k8s-version-239115" context setting]
	I0731 20:59:31.749997  188656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.757569  188656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:31.771188  188656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0731 20:59:31.771222  188656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:31.771236  188656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:31.771292  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.811574  188656 cri.go:89] found id: ""
	I0731 20:59:31.811653  188656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:31.829930  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:31.840145  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:31.840165  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:31.840206  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:31.851266  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:31.851340  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:31.861634  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:31.871532  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:31.871605  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:31.882164  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.892222  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:31.892291  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.903299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:31.916163  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:31.916235  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
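Because admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf are missing (or would not reference control-plane.minikube.internal:8443), minikube deletes them so kubeadm can regenerate them in the next step. The grep-then-rm pattern from the log, written as a loop (sketch; the endpoint string is taken from the log):

    ENDPOINT=https://control-plane.minikube.internal:8443
    for conf in admin kubelet controller-manager scheduler; do
      f=/etc/kubernetes/${conf}.conf
      # Keep the file only if it already points at the expected endpoint;
      # otherwise remove it so `kubeadm init phase kubeconfig` rewrites it.
      if ! sudo grep -q "$ENDPOINT" "$f" 2>/dev/null; then
        sudo rm -f "$f"
      fi
    done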
	I0731 20:59:31.929423  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:31.942668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.107220  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.953249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.207806  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.307640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
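Rather than a full `kubeadm init`, the restart path replays individual phases against the generated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane and etcd. The same sequence, condensed (sketch; the PATH prefix matches the log):

    KUBEADM_CFG=/var/tmp/minikube/kubeadm.yaml
    export PATH=/var/lib/minikube/binaries/v1.20.0:$PATH
    # $phase is intentionally unquoted so "certs all" expands to two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$PATH" kubeadm init phase $phase --config "$KUBEADM_CFG"
    done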
	I0731 20:59:33.410338  188656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:33.410444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:31.221009  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:33.589275  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.612024  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:35.109601  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.713632  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:32.714137  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:32.714169  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:32.714064  189758 retry.go:31] will retry after 2.013485748s: waiting for machine to come up
	I0731 20:59:34.729625  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:34.730006  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:34.730070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:34.729970  189758 retry.go:31] will retry after 2.193072749s: waiting for machine to come up
	I0731 20:59:36.924345  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:36.924990  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:36.925008  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:36.924940  189758 retry.go:31] will retry after 3.394781674s: waiting for machine to come up
	I0731 20:59:33.910958  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.411011  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.911110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.410715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.911117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.410825  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.911311  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.410757  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.910786  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:38.410821  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.089622  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:38.589435  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:37.110446  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:39.111323  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:40.322463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:40.322827  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:40.322857  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:40.322774  189758 retry.go:31] will retry after 3.836613891s: waiting for machine to come up
	I0731 20:59:38.910891  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.411547  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.911260  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.411404  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.910719  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.411449  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.910643  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.410967  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.910703  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:43.411187  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
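The repeated pgrep calls are minikube polling, roughly every 500 ms (visible in the timestamps), for the kube-apiserver process launched from the static pod manifests. An equivalent wait loop (sketch; the 2-minute timeout is an assumption, not stated in the log):

    # Poll until a kube-apiserver started by minikube is running, or give up.
    deadline=$(( $(date +%s) + 120 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2; exit 1
      fi
      sleep 0.5
    done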
	I0731 20:59:41.088768  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:43.589256  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:41.609891  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.111379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
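Interleaved with this, the other profiles (processes 188133 and 188266) keep reporting that their metrics-server pods are not Ready, which is the condition that eventually fails the MetricsServer and *AfterStop assertions. The same Ready condition that pod_ready.go polls can be inspected by hand with kubectl (sketch; the pod name is taken from the log, but the kubeconfig context must be the matching profile's):

    # Prints "True" once the pod is Ready; "False" matches the log lines above.
    kubectl -n kube-system get pod metrics-server-569cc877fc-jf52w \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'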
	I0731 20:59:44.160516  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161009  187862 main.go:141] libmachine: (embed-certs-831240) Found IP for machine: 192.168.39.92
	I0731 20:59:44.161029  187862 main.go:141] libmachine: (embed-certs-831240) Reserving static IP address...
	I0731 20:59:44.161041  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has current primary IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161561  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.161594  187862 main.go:141] libmachine: (embed-certs-831240) DBG | skip adding static IP to network mk-embed-certs-831240 - found existing host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"}
	I0731 20:59:44.161609  187862 main.go:141] libmachine: (embed-certs-831240) Reserved static IP address: 192.168.39.92
	I0731 20:59:44.161623  187862 main.go:141] libmachine: (embed-certs-831240) Waiting for SSH to be available...
	I0731 20:59:44.161638  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Getting to WaitForSSH function...
	I0731 20:59:44.163936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164285  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.164318  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164447  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH client type: external
	I0731 20:59:44.164479  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa (-rw-------)
	I0731 20:59:44.164499  187862 main.go:141] libmachine: (embed-certs-831240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:44.164510  187862 main.go:141] libmachine: (embed-certs-831240) DBG | About to run SSH command:
	I0731 20:59:44.164544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | exit 0
	I0731 20:59:44.293463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:44.293819  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetConfigRaw
	I0731 20:59:44.294490  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.296982  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297351  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.297381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297634  187862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/config.json ...
	I0731 20:59:44.297877  187862 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:44.297897  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:44.298116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.300452  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300806  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.300829  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300953  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.301146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301308  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301439  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.301634  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.301811  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.301823  187862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:44.418065  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:44.418105  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418428  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:59:44.418446  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418666  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.421984  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422403  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.422434  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422568  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.422733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.422893  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.423023  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.423208  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.423371  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.423410  187862 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831240 && echo "embed-certs-831240" | sudo tee /etc/hostname
	I0731 20:59:44.549670  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831240
	
	I0731 20:59:44.549697  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.552503  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.552851  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.552876  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.553017  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.553200  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553398  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553533  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.553721  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.554012  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.554039  187862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831240/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:44.674662  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:44.674693  187862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:44.674713  187862 buildroot.go:174] setting up certificates
	I0731 20:59:44.674723  187862 provision.go:84] configureAuth start
	I0731 20:59:44.674733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.675011  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.677631  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.677911  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.677951  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.678081  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.679869  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680177  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.680205  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680332  187862 provision.go:143] copyHostCerts
	I0731 20:59:44.680391  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:44.680401  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:44.680450  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:44.680537  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:44.680545  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:44.680564  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:44.680628  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:44.680635  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:44.680652  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:44.680711  187862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831240 san=[127.0.0.1 192.168.39.92 embed-certs-831240 localhost minikube]
	I0731 20:59:44.733872  187862 provision.go:177] copyRemoteCerts
	I0731 20:59:44.733927  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:44.733951  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.736399  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736731  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.736758  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736935  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.737131  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.737273  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.737430  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:44.824050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:44.847699  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 20:59:44.872138  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:44.896013  187862 provision.go:87] duration metric: took 221.275458ms to configureAuth
	I0731 20:59:44.896042  187862 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:44.896234  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:44.896327  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.898820  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899206  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.899232  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899457  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.899660  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899822  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899993  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.900216  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.900438  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.900462  187862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:45.179165  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:45.179194  187862 machine.go:97] duration metric: took 881.302407ms to provisionDockerMachine
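provisionDockerMachine finishes by writing /etc/sysconfig/crio.minikube with the service-CIDR insecure-registry option and restarting CRI-O (the SSH command at 20:59:44.900462). The drop-in can be reproduced or verified on the node like this, with the content exactly as echoed back in the log:

    # Write the CRI-O options drop-in and restart the runtime, as the provisioner does.
    sudo mkdir -p /etc/sysconfig
    printf '%s' "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio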
	I0731 20:59:45.179213  187862 start.go:293] postStartSetup for "embed-certs-831240" (driver="kvm2")
	I0731 20:59:45.179226  187862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:45.179252  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.179615  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:45.179646  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.182617  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183047  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.183069  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183284  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.183510  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.183654  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.183805  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.273492  187862 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:45.277593  187862 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:45.277618  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:45.277687  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:45.277782  187862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:45.277889  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:45.288172  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:45.311763  187862 start.go:296] duration metric: took 132.534326ms for postStartSetup
	I0731 20:59:45.311803  187862 fix.go:56] duration metric: took 22.228928797s for fixHost
	I0731 20:59:45.311827  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.314578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.314962  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.314998  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.315146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.315381  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315549  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315681  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.315868  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:45.316035  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:45.316045  187862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:59:45.426289  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459585.381297707
	
	I0731 20:59:45.426314  187862 fix.go:216] guest clock: 1722459585.381297707
	I0731 20:59:45.426324  187862 fix.go:229] Guest: 2024-07-31 20:59:45.381297707 +0000 UTC Remote: 2024-07-31 20:59:45.311808006 +0000 UTC m=+363.090091892 (delta=69.489701ms)
	I0731 20:59:45.426379  187862 fix.go:200] guest clock delta is within tolerance: 69.489701ms
	I0731 20:59:45.426387  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 22.343543995s
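	The fix step above compares the guest clock read over SSH against the host-side timestamp and only intervenes when the delta exceeds a tolerance. A minimal Go sketch of that comparison, using the two timestamps from the log; the 1s tolerance and the helper name checkClockDelta are assumptions for illustration, not minikube's actual code:

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// checkClockDelta reports the guest-vs-host clock delta and whether it is
	// within the given tolerance (values mirror the fix.go lines above).
	func checkClockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		host := time.Unix(1722459585, 311808006)  // Remote: 20:59:45.311808006
		guest := time.Unix(1722459585, 381297707) // Guest:  20:59:45.381297707
		delta, ok := checkClockDelta(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta≈69.489701ms
	}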
	I0731 20:59:45.426419  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.426684  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:45.429330  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429757  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.429785  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429952  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430453  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430671  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430790  187862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:45.430854  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.430905  187862 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:45.430943  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.433850  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434108  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434192  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434222  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434385  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434580  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434584  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434611  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434760  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.434768  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434939  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434929  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.435099  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.435243  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.542122  187862 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:45.548583  187862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:45.690235  187862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:45.696897  187862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:45.696986  187862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:45.714456  187862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:45.714480  187862 start.go:495] detecting cgroup driver to use...
	I0731 20:59:45.714546  187862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:45.732184  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:45.747047  187862 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:45.747104  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:45.761152  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:45.775267  187862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:45.890891  187862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:46.043503  187862 docker.go:233] disabling docker service ...
	I0731 20:59:46.043577  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:46.058174  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:46.070900  187862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:46.209527  187862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:46.343868  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:46.357583  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:46.375819  187862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:46.375875  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.386762  187862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:46.386844  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.397495  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.407654  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.418326  187862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:46.428983  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.439530  187862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.457956  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.468003  187862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:46.477332  187862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:46.477400  187862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:46.490886  187862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:46.500516  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:46.617952  187862 ssh_runner.go:195] Run: sudo systemctl restart crio
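	The netfilter handling above tolerates a missing bridge-nf-call-iptables sysctl: if the probe fails, br_netfilter is loaded, and IPv4 forwarding is enabled either way before crio is restarted. A rough Go sketch of that sequence with os/exec, run locally rather than over the SSH runner; the function name ensureNetfilter is an assumption:

	package main

	import (
		"log"
		"os/exec"
	)

	// ensureNetfilter mirrors the log above: probe the sysctl, fall back to
	// loading br_netfilter if it is absent, then enable ip_forward.
	func ensureNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// The sysctl may not exist until the module is loaded; not fatal.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return err
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureNetfilter(); err != nil {
			log.Fatal(err)
		}
	}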
	I0731 20:59:46.761978  187862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:46.762088  187862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:46.767210  187862 start.go:563] Will wait 60s for crictl version
	I0731 20:59:46.767275  187862 ssh_runner.go:195] Run: which crictl
	I0731 20:59:46.771502  187862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:46.810894  187862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:46.810976  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.839234  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.871209  187862 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:46.872648  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:46.875374  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875683  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:46.875698  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875900  187862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:46.880402  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:46.894098  187862 kubeadm.go:883] updating cluster {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:46.894238  187862 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:46.894300  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:46.937003  187862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:46.937079  187862 ssh_runner.go:195] Run: which lz4
	I0731 20:59:46.941158  187862 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:59:46.945395  187862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:46.945425  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:43.910997  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.410783  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.911365  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.410690  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.911150  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.411384  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.910579  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.411171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.910578  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:48.411377  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.589690  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:47.591464  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:46.608955  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.611634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:50.615557  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.414703  187862 crio.go:462] duration metric: took 1.473569222s to copy over tarball
	I0731 20:59:48.414789  187862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:50.666750  187862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251926888s)
	I0731 20:59:50.666783  187862 crio.go:469] duration metric: took 2.252043688s to extract the tarball
	I0731 20:59:50.666793  187862 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:50.707188  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:50.749781  187862 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:50.749808  187862 cache_images.go:84] Images are preloaded, skipping loading
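	The preload flow above is: list the CRI-O image store with crictl, and only if the expected kube images are missing, copy the preload tarball over and unpack it into /var with lz4 before re-checking. A compressed Go sketch of that flow under stated assumptions (the SSH hop is elided and everything runs locally; the helper name ensurePreloadedImages is made up, while the image name, tar flags, and tarball path are taken from the log):

	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	// ensurePreloadedImages unpacks the preload tarball into /var when the
	// expected image is not already in the CRI-O image store, then removes it.
	func ensurePreloadedImages(image, tarball string) error {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err == nil && strings.Contains(string(out), image) {
			return nil // all images are preloaded, skip loading
		}
		// Extract with xattrs preserved, as in the log above.
		extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if err := extract.Run(); err != nil {
			return err
		}
		return exec.Command("sudo", "rm", "-f", tarball).Run()
	}

	func main() {
		if err := ensurePreloadedImages("registry.k8s.io/kube-apiserver:v1.30.3", "/preloaded.tar.lz4"); err != nil {
			log.Fatal(err)
		}
	}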
	I0731 20:59:50.749817  187862 kubeadm.go:934] updating node { 192.168.39.92 8443 v1.30.3 crio true true} ...
	I0731 20:59:50.749923  187862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-831240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:50.749998  187862 ssh_runner.go:195] Run: crio config
	I0731 20:59:50.797191  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:50.797214  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:50.797227  187862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:50.797253  187862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831240 NodeName:embed-certs-831240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:50.797484  187862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:50.797556  187862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:50.808170  187862 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:50.808236  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:50.817847  187862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 20:59:50.834107  187862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:50.849722  187862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 20:59:50.866599  187862 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:50.870727  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:50.884490  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:51.043488  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:51.064792  187862 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240 for IP: 192.168.39.92
	I0731 20:59:51.064816  187862 certs.go:194] generating shared ca certs ...
	I0731 20:59:51.064836  187862 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:51.065142  187862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:51.065225  187862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:51.065254  187862 certs.go:256] generating profile certs ...
	I0731 20:59:51.065443  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/client.key
	I0731 20:59:51.065571  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key.4e545c52
	I0731 20:59:51.065639  187862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key
	I0731 20:59:51.065798  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:51.065846  187862 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:51.065857  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:51.065883  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:51.065909  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:51.065929  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:51.065971  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:51.066633  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:51.107287  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:51.138745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:51.176139  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:51.211344  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 20:59:51.241050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:59:51.269307  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:51.293184  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:59:51.316745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:51.343620  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:51.367293  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:51.391789  187862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:51.413821  187862 ssh_runner.go:195] Run: openssl version
	I0731 20:59:51.420455  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:51.431721  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436672  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436724  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.442604  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:51.453601  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:51.464109  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468598  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468648  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.474333  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:51.484758  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:51.495093  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499557  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499605  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.505244  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:51.515545  187862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:51.519923  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:51.525696  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:51.531430  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:51.537082  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:51.542713  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:51.548206  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
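	Each openssl x509 -checkend 86400 call above asks whether a certificate will expire within the next 24 hours. An equivalent check in Go with crypto/x509, as a sketch only; the helper name expiresWithin is an assumption, and the path is just one of the certs named in the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same question `openssl x509 -checkend` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}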
	I0731 20:59:51.553705  187862 kubeadm.go:392] StartCluster: {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:51.553793  187862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:51.553841  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.592396  187862 cri.go:89] found id: ""
	I0731 20:59:51.592472  187862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:51.602510  187862 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:51.602528  187862 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:51.602578  187862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:51.612384  187862 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:51.613530  187862 kubeconfig.go:125] found "embed-certs-831240" server: "https://192.168.39.92:8443"
	I0731 20:59:51.615991  187862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:51.625205  187862 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I0731 20:59:51.625239  187862 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:51.625253  187862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:51.625307  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.663278  187862 cri.go:89] found id: ""
	I0731 20:59:51.663370  187862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:51.678876  187862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:51.688071  187862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:51.688092  187862 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:51.688139  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:51.696441  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:51.696494  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:51.705310  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:51.713545  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:51.713599  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:51.723512  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.732304  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:51.732380  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.741301  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:51.749537  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:51.749583  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
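	The cleanup pass above greps each kubeconfig for the expected control-plane endpoint and removes any file that does not reference it (here all four files are simply absent, so each grep exits non-zero and the rm is a no-op). A small Go sketch of the same decision; the endpoint and paths are copied from the log, the helper name removeStaleKubeconfigs is assumed:

	package main

	import (
		"log"
		"os"
		"strings"
	)

	// removeStaleKubeconfigs deletes any kubeconfig that does not reference the
	// expected control-plane endpoint, mirroring the grep/rm pass above.
	func removeStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // endpoint present, keep the file
			}
			os.Remove(p) // missing or stale: remove so kubeadm regenerates it
		}
	}

	func main() {
		removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
		log.Println("stale kubeconfig cleanup done")
	}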
	I0731 20:59:51.758609  187862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:51.774450  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:51.888916  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:48.910784  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.411137  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.911453  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.411128  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.911431  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.410483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.910975  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.411519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.911079  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.410802  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.094603  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.589951  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:53.424691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:55.609675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.666705  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.899759  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.975806  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:53.050422  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:53.050493  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.551073  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.051427  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.551268  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.570361  187862 api_server.go:72] duration metric: took 1.519937245s to wait for apiserver process to appear ...
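	The ~500 ms pgrep cadence visible above is a plain poll-until-found loop. A sketch of that pattern in Go; the pgrep arguments are copied from the log, while the 2-minute timeout and the helper name waitForAPIServerProcess are assumptions:

	package main

	import (
		"errors"
		"log"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep roughly every 500ms until the
	// kube-apiserver process shows up or the timeout elapses.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil // process found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for kube-apiserver process")
	}

	func main() {
		if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			log.Fatal(err)
		}
		log.Println("kube-apiserver process is up")
	}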
	I0731 20:59:54.570389  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:54.570414  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:53.911405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.410870  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.911330  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.411491  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.911380  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.411483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.910602  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.411228  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.910486  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:58.411198  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.260421  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.260455  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.260469  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.284265  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.284301  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.570976  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.575616  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:57.575644  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.071247  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.075871  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.075903  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.570906  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.581990  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.582038  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:59.070528  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:59.074787  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 20:59:59.081502  187862 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:59.081541  187862 api_server.go:131] duration metric: took 4.511132973s to wait for apiserver health ...
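
	The 500 responses above are the apiserver's aggregated /healthz report: each post-start hook is listed individually, and the endpoint keeps returning 500 until poststarthook/rbac/bootstrap-roles passes, at which point the 200 at 20:59:59 unblocks the wait. A minimal Go sketch of this polling pattern (not minikube's actual api_server.go; the endpoint URL and the skip-verify TLS client are assumptions for illustration only):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
	// printing the aggregated check output on each non-200 response.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// illustration only; a real client would load the cluster CA and client certs
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// e.g. 500 while poststarthook/rbac/bootstrap-roles is still failing
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.92:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
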
	I0731 20:59:59.081552  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:59.081561  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:59.083504  187862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:55.089279  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:57.589380  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:59.084894  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:59.098139  187862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
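
	The 496-byte 1-k8s.conflist copied above is not shown in the log; a representative bridge CNI conflist of the same general shape, with assumed values rather than the exact file minikube writes, looks like this sketch:

	package main

	import "os"

	// Representative bridge CNI config: values (name, subnet, plugin list) are
	// assumptions for illustration, not the contents of minikube's 1-k8s.conflist.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Write the conflist where the kubelet's bridge CNI expects it.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
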
	I0731 20:59:59.118458  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:59.128022  187862 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:59.128061  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:59.128071  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:59.128082  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:59.128100  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:59.128113  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:59.128121  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:59.128134  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:59.128145  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:59.128156  187862 system_pods.go:74] duration metric: took 9.673815ms to wait for pod list to return data ...
	I0731 20:59:59.128168  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:59.131825  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:59.131853  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:59.131865  187862 node_conditions.go:105] duration metric: took 3.691724ms to run NodePressure ...
	I0731 20:59:59.131897  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:59.494923  187862 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501848  187862 kubeadm.go:739] kubelet initialised
	I0731 20:59:59.501875  187862 kubeadm.go:740] duration metric: took 6.920816ms waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501885  187862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:59.510503  187862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.518204  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518234  187862 pod_ready.go:81] duration metric: took 7.702873ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.518247  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518263  187862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.523236  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523258  187862 pod_ready.go:81] duration metric: took 4.985299ms for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.523266  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.535237  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535256  187862 pod_ready.go:81] duration metric: took 11.97449ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.535270  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.541512  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541531  187862 pod_ready.go:81] duration metric: took 6.24797ms for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.541539  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541545  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.922722  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922757  187862 pod_ready.go:81] duration metric: took 381.203526ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.922771  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922779  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.322049  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322077  187862 pod_ready.go:81] duration metric: took 399.289505ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.322088  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322094  187862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.722961  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.722993  187862 pod_ready.go:81] duration metric: took 400.88956ms for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.723008  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.723017  187862 pod_ready.go:38] duration metric: took 1.221112347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
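
	The pod_ready.go waits above poll each system-critical pod's Ready condition and skip pods whose node is not yet "Ready". A minimal client-go sketch of the same per-pod check (kubeconfig path, namespace, and pod name are illustrative, taken from this log; this is not minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path is an assumption
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-2ks55", metav1.GetOptions{})
			if err == nil && podReady(p) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
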
	I0731 21:00:00.723050  187862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:00:00.735642  187862 ops.go:34] apiserver oom_adj: -16
	I0731 21:00:00.735697  187862 kubeadm.go:597] duration metric: took 9.133136671s to restartPrimaryControlPlane
	I0731 21:00:00.735735  187862 kubeadm.go:394] duration metric: took 9.182030801s to StartCluster
	I0731 21:00:00.735764  187862 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.735860  187862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:00:00.737955  187862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.738247  187862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:00:00.738329  187862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:00:00.738418  187862 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831240"
	I0731 21:00:00.738432  187862 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831240"
	I0731 21:00:00.738463  187862 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-831240"
	W0731 21:00:00.738475  187862 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:00:00.738481  187862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831240"
	I0731 21:00:00.738513  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738547  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:00:00.738581  187862 addons.go:69] Setting metrics-server=true in profile "embed-certs-831240"
	I0731 21:00:00.738651  187862 addons.go:234] Setting addon metrics-server=true in "embed-certs-831240"
	W0731 21:00:00.738666  187862 addons.go:243] addon metrics-server should already be in state true
	I0731 21:00:00.738735  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738818  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738858  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.738897  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738960  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.739144  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.739190  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.740244  187862 out.go:177] * Verifying Kubernetes components...
	I0731 21:00:00.746003  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:00.755735  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0731 21:00:00.755773  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0731 21:00:00.756268  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756271  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756594  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0731 21:00:00.756820  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756847  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.756892  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756917  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757069  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.757228  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757254  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757458  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.757638  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.757668  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757745  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.757774  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.758005  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.758543  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.758586  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.761553  187862 addons.go:234] Setting addon default-storageclass=true in "embed-certs-831240"
	W0731 21:00:00.761587  187862 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:00:00.761618  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.762018  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.762070  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.775492  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0731 21:00:00.776091  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.776712  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.776743  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.776760  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I0731 21:00:00.777245  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.777402  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.777513  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.777920  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.777945  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.778185  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0731 21:00:00.778393  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.778603  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.778687  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.779223  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.779243  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.779665  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.779718  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.780231  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.780274  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.780612  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.781947  187862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:00:00.782994  187862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:58.110503  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.112109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.784194  187862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:00.784216  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:00:00.784240  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.784937  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:00:00.784958  187862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:00:00.784984  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.788544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.788947  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.788970  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789127  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.789389  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.789521  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.789548  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789571  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.789773  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.790126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.790324  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.790502  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.790663  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.799024  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0731 21:00:00.799718  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.800341  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.800360  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.800967  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.801258  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.803078  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.803555  187862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:00.803571  187862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:00:00.803591  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.809363  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.809461  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809492  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.809512  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809680  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.809858  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.810032  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.933963  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:00:00.953572  187862 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:01.036486  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:01.040636  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:00:01.040658  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:00:01.063384  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:01.068645  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:00:01.068675  187862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:00:01.090838  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:01.090861  187862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:00:01.113173  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:02.099966  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063427097s)
	I0731 21:00:02.100021  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100035  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100080  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036657274s)
	I0731 21:00:02.100129  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100338  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100441  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100452  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100461  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100580  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100605  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100615  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100623  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100698  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100709  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100723  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100866  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100875  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100882  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.107654  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.107688  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.107952  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.107968  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.108003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140031  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026799248s)
	I0731 21:00:02.140100  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140424  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140455  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140470  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140482  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140494  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140772  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140800  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140808  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140817  187862 addons.go:475] Verifying addon metrics-server=true in "embed-certs-831240"
	I0731 21:00:02.142583  187862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:00:02.143787  187862 addons.go:510] duration metric: took 1.405477731s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
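
	Each addon is installed by copying its manifests into /etc/kubernetes/addons on the VM and applying them with the bundled kubectl against the in-VM kubeconfig, as the logged command lines show. A simplified sketch of that apply step, run locally rather than over SSH as minikube's ssh_runner does (paths are copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig .../kubectl apply -f <addon manifests>
		cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
		)
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
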
	I0731 20:59:58.910774  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.410697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.911233  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.411170  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.911416  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.410979  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.911444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.411537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.911216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:03.411386  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.588315  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.610109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:04.610324  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.958162  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:05.458997  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:03.910942  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.411505  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.911485  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.410763  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.910937  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.411216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.910743  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.410941  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.910922  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:08.410593  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.589597  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.089475  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.090023  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:06.610390  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.110758  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.958154  187862 node_ready.go:49] node "embed-certs-831240" has status "Ready":"True"
	I0731 21:00:07.958180  187862 node_ready.go:38] duration metric: took 7.004576791s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:07.958191  187862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:00:07.969639  187862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974704  187862 pod_ready.go:92] pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:07.974733  187862 pod_ready.go:81] duration metric: took 5.064645ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974745  187862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:09.980566  187862 pod_ready.go:102] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:10.480476  187862 pod_ready.go:92] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.480501  187862 pod_ready.go:81] duration metric: took 2.505748029s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.480511  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485850  187862 pod_ready.go:92] pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.485873  187862 pod_ready.go:81] duration metric: took 5.353478ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485883  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:08.910788  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.410807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.911286  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.411372  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.910748  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.411253  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.411208  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.910887  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:13.411318  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.589454  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.090483  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:11.610842  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.110306  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:12.492346  187862 pod_ready.go:102] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.991859  187862 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.991884  187862 pod_ready.go:81] duration metric: took 3.505993775s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.991893  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997932  187862 pod_ready.go:92] pod "kube-proxy-x662j" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.997961  187862 pod_ready.go:81] duration metric: took 6.060225ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997974  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007155  187862 pod_ready.go:92] pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:14.007178  187862 pod_ready.go:81] duration metric: took 9.197289ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007187  187862 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:16.013417  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.910943  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.410728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.911343  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.410545  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.910560  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.411117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.910537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.410761  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.910796  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:18.411138  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.589010  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.589215  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:16.609886  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.610209  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.611613  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.013504  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.513116  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.911394  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.411098  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.910629  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.410698  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.910760  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.410503  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.910582  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.410724  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.910792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:23.410961  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.089938  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.588082  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.109996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:25.110361  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:22.514254  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:24.514729  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.013263  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.910510  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.410725  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.411543  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.911473  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.410494  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.910519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.410950  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.911528  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:28.411350  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.589873  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.590134  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.612311  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:30.110116  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:29.014386  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:31.014534  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:28.911371  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.411269  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.911465  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.410633  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.911166  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.411184  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.910806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.410806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.911125  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:33.410942  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:33.411021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:33.461204  188656 cri.go:89] found id: ""
	I0731 21:00:33.461232  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.461241  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:33.461249  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:33.461313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:33.500898  188656 cri.go:89] found id: ""
	I0731 21:00:33.500927  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.500937  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:33.500944  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:33.501010  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:33.536865  188656 cri.go:89] found id: ""
	I0731 21:00:33.536889  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.536902  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:33.536908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:33.536957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:33.578540  188656 cri.go:89] found id: ""
	I0731 21:00:33.578570  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.578582  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:33.578595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:33.578686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:33.616242  188656 cri.go:89] found id: ""
	I0731 21:00:33.616266  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.616276  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:33.616283  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:33.616345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:33.650436  188656 cri.go:89] found id: ""
	I0731 21:00:33.650468  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.650479  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:33.650487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:33.650552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:33.687256  188656 cri.go:89] found id: ""
	I0731 21:00:33.687288  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.687300  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:33.687308  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:33.687365  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:33.720381  188656 cri.go:89] found id: ""
	I0731 21:00:33.720428  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.720440  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:33.720453  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:33.720469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:33.772182  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:33.772226  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:33.787323  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:33.787359  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:00:30.089778  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.587877  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.110769  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:34.610418  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:33.514142  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.013676  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:00:33.907858  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:33.907878  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:33.907892  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:33.974118  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:33.974157  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
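
	With no kube-apiserver, etcd, or other control-plane containers found by crictl, the log-gathering pass above falls back to the kubelet and CRI-O journals plus raw container status. A simplified sketch of that diagnostics pass using the same commands as the log, run locally rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, continuing on error
	// so later diagnostics are still collected.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s\n", name, args, out)
		if err != nil {
			fmt.Println("error:", err)
		}
	}

	func main() {
		run("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver")
		run("sudo", "journalctl", "-u", "kubelet", "-n", "400")
		run("sudo", "journalctl", "-u", "crio", "-n", "400")
		run("sudo", "crictl", "ps", "-a")
	}
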
	I0731 21:00:36.513427  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:36.527531  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:36.527588  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:36.567679  188656 cri.go:89] found id: ""
	I0731 21:00:36.567706  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.567714  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:36.567726  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:36.567786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:36.608106  188656 cri.go:89] found id: ""
	I0731 21:00:36.608134  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.608145  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:36.608153  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:36.608215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:36.651783  188656 cri.go:89] found id: ""
	I0731 21:00:36.651815  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.651824  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:36.651830  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:36.651892  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:36.686716  188656 cri.go:89] found id: ""
	I0731 21:00:36.686743  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.686751  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:36.686758  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:36.686823  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:36.721823  188656 cri.go:89] found id: ""
	I0731 21:00:36.721857  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.721865  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:36.721871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:36.721939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:36.758060  188656 cri.go:89] found id: ""
	I0731 21:00:36.758093  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.758103  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:36.758112  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:36.758173  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:36.801667  188656 cri.go:89] found id: ""
	I0731 21:00:36.801694  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.801704  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:36.801712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:36.801776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:36.845084  188656 cri.go:89] found id: ""
	I0731 21:00:36.845113  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.845124  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:36.845137  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:36.845152  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:36.897208  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:36.897248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:36.910716  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:36.910750  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:36.987259  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:36.987285  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:36.987304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:37.061109  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:37.061144  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:34.589416  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.592841  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.088346  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.611386  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.111149  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:38.516701  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.017409  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.600847  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:39.615897  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:39.615957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:39.655390  188656 cri.go:89] found id: ""
	I0731 21:00:39.655417  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.655424  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:39.655430  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:39.655502  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:39.694180  188656 cri.go:89] found id: ""
	I0731 21:00:39.694213  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.694224  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:39.694231  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:39.694300  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:39.736752  188656 cri.go:89] found id: ""
	I0731 21:00:39.736783  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.736793  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:39.736801  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:39.736860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:39.775685  188656 cri.go:89] found id: ""
	I0731 21:00:39.775770  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.775790  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:39.775802  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:39.775871  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:39.816790  188656 cri.go:89] found id: ""
	I0731 21:00:39.816820  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.816829  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:39.816835  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:39.816886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:39.854931  188656 cri.go:89] found id: ""
	I0731 21:00:39.854963  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.854973  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:39.854981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:39.855045  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:39.891039  188656 cri.go:89] found id: ""
	I0731 21:00:39.891066  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.891074  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:39.891083  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:39.891136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:39.927434  188656 cri.go:89] found id: ""
	I0731 21:00:39.927463  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.927473  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:39.927483  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:39.927496  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:39.941240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:39.941272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:40.017212  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:40.017233  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:40.017246  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:40.094047  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:40.094081  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:40.138940  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:40.138966  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:42.690818  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:42.704855  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:42.704931  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:42.752315  188656 cri.go:89] found id: ""
	I0731 21:00:42.752347  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.752368  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:42.752376  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:42.752445  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:42.790060  188656 cri.go:89] found id: ""
	I0731 21:00:42.790090  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.790101  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:42.790109  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:42.790220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:42.825504  188656 cri.go:89] found id: ""
	I0731 21:00:42.825532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.825540  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:42.825547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:42.825598  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:42.860157  188656 cri.go:89] found id: ""
	I0731 21:00:42.860193  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.860204  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:42.860213  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:42.860286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:42.902914  188656 cri.go:89] found id: ""
	I0731 21:00:42.902947  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.902959  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:42.902967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:42.903036  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:42.950503  188656 cri.go:89] found id: ""
	I0731 21:00:42.950532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.950541  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:42.950550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:42.950603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:43.010232  188656 cri.go:89] found id: ""
	I0731 21:00:43.010261  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.010272  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:43.010280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:43.010344  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:43.045487  188656 cri.go:89] found id: ""
	I0731 21:00:43.045517  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.045527  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:43.045539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:43.045556  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:43.123248  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:43.123279  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:43.123296  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:43.212230  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:43.212272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:43.254595  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:43.254626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:43.306187  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:43.306227  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:41.589806  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.088126  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.611786  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.109436  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:43.513500  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.514161  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.820246  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:45.835707  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:45.835786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:45.872079  188656 cri.go:89] found id: ""
	I0731 21:00:45.872110  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.872122  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:45.872130  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:45.872196  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:45.910637  188656 cri.go:89] found id: ""
	I0731 21:00:45.910664  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.910672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:45.910678  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:45.910740  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:45.945316  188656 cri.go:89] found id: ""
	I0731 21:00:45.945360  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.945372  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:45.945380  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:45.945455  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:45.982015  188656 cri.go:89] found id: ""
	I0731 21:00:45.982046  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.982057  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:45.982096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:45.982165  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:46.017359  188656 cri.go:89] found id: ""
	I0731 21:00:46.017392  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.017404  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:46.017412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:46.017478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:46.054401  188656 cri.go:89] found id: ""
	I0731 21:00:46.054431  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.054447  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:46.054454  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:46.054507  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:46.092107  188656 cri.go:89] found id: ""
	I0731 21:00:46.092130  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.092137  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:46.092143  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:46.092190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:46.128613  188656 cri.go:89] found id: ""
	I0731 21:00:46.128642  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.128652  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:46.128665  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:46.128679  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:46.144539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:46.144570  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:46.219399  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:46.219433  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:46.219448  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:46.304486  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:46.304529  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:46.344087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:46.344121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:46.090543  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.090607  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:46.111072  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.610316  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.611553  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.014287  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.513252  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.894728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:48.916610  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:48.916675  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:48.978515  188656 cri.go:89] found id: ""
	I0731 21:00:48.978543  188656 logs.go:276] 0 containers: []
	W0731 21:00:48.978550  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:48.978557  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:48.978615  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:49.026224  188656 cri.go:89] found id: ""
	I0731 21:00:49.026257  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.026268  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:49.026276  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:49.026354  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:49.064967  188656 cri.go:89] found id: ""
	I0731 21:00:49.064994  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.065003  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:49.065010  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:49.065070  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:49.101966  188656 cri.go:89] found id: ""
	I0731 21:00:49.101990  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.101999  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:49.102004  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:49.102056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:49.137775  188656 cri.go:89] found id: ""
	I0731 21:00:49.137801  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.137809  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:49.137815  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:49.137867  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:49.173778  188656 cri.go:89] found id: ""
	I0731 21:00:49.173824  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.173832  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:49.173839  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:49.173908  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:49.207211  188656 cri.go:89] found id: ""
	I0731 21:00:49.207239  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.207247  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:49.207254  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:49.207333  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:49.244126  188656 cri.go:89] found id: ""
	I0731 21:00:49.244159  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.244180  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:49.244202  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:49.244221  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:49.299606  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:49.299646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:49.314093  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:49.314121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:49.384691  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:49.384712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:49.384728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:49.464425  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:49.464462  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.005670  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:52.019617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:52.019705  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:52.053452  188656 cri.go:89] found id: ""
	I0731 21:00:52.053485  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.053494  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:52.053500  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:52.053552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:52.094462  188656 cri.go:89] found id: ""
	I0731 21:00:52.094495  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.094504  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:52.094510  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:52.094572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:52.134555  188656 cri.go:89] found id: ""
	I0731 21:00:52.134584  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.134595  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:52.134602  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:52.134676  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:52.168805  188656 cri.go:89] found id: ""
	I0731 21:00:52.168851  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.168863  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:52.168871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:52.168939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:52.203093  188656 cri.go:89] found id: ""
	I0731 21:00:52.203121  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.203132  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:52.203140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:52.203213  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:52.237816  188656 cri.go:89] found id: ""
	I0731 21:00:52.237842  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.237850  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:52.237857  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:52.237906  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:52.272136  188656 cri.go:89] found id: ""
	I0731 21:00:52.272175  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.272194  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:52.272202  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:52.272261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:52.306616  188656 cri.go:89] found id: ""
	I0731 21:00:52.306641  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.306649  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:52.306659  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:52.306671  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:52.372668  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:52.372690  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:52.372707  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:52.457752  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:52.457794  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.496087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:52.496129  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:52.548137  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:52.548176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:50.588204  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.089737  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.110034  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.110293  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:52.514848  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.013623  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.015221  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.063463  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:55.076922  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:55.077005  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:55.117479  188656 cri.go:89] found id: ""
	I0731 21:00:55.117511  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.117523  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:55.117531  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:55.117595  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:55.156311  188656 cri.go:89] found id: ""
	I0731 21:00:55.156339  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.156348  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:55.156354  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:55.156421  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:55.196778  188656 cri.go:89] found id: ""
	I0731 21:00:55.196807  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.196818  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:55.196826  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:55.196898  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:55.237575  188656 cri.go:89] found id: ""
	I0731 21:00:55.237605  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.237614  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:55.237620  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:55.237672  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:55.271717  188656 cri.go:89] found id: ""
	I0731 21:00:55.271746  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.271754  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:55.271760  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:55.271811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:55.307586  188656 cri.go:89] found id: ""
	I0731 21:00:55.307618  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.307630  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:55.307637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:55.307708  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:55.343325  188656 cri.go:89] found id: ""
	I0731 21:00:55.343352  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.343361  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:55.343367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:55.343418  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:55.378959  188656 cri.go:89] found id: ""
	I0731 21:00:55.378988  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.378997  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:55.379008  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:55.379021  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:55.454213  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:55.454243  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:55.454260  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:55.532802  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:55.532839  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.575903  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:55.575940  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:55.635105  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:55.635140  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.149801  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:58.162682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:58.162743  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:58.196220  188656 cri.go:89] found id: ""
	I0731 21:00:58.196245  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.196254  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:58.196260  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:58.196313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:58.231052  188656 cri.go:89] found id: ""
	I0731 21:00:58.231083  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.231093  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:58.231099  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:58.231156  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:58.265569  188656 cri.go:89] found id: ""
	I0731 21:00:58.265599  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.265612  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:58.265633  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:58.265695  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:58.300750  188656 cri.go:89] found id: ""
	I0731 21:00:58.300779  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.300788  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:58.300793  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:58.300869  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:58.333920  188656 cri.go:89] found id: ""
	I0731 21:00:58.333949  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.333958  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:58.333963  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:58.334015  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:58.368732  188656 cri.go:89] found id: ""
	I0731 21:00:58.368759  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.368771  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:58.368787  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:58.368855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:58.408454  188656 cri.go:89] found id: ""
	I0731 21:00:58.408488  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.408501  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:58.408510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:58.408575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:58.445855  188656 cri.go:89] found id: ""
	I0731 21:00:58.445888  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.445900  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:58.445913  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:58.445934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:58.496144  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:58.496177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.510708  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:58.510743  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:58.580690  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:58.580712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:58.580725  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:58.657281  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:58.657320  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.591068  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:58.088264  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.610282  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.611376  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.017831  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.514115  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.196374  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:01.209044  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:01.209111  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:01.247313  188656 cri.go:89] found id: ""
	I0731 21:01:01.247343  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.247353  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:01.247360  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:01.247443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:01.282269  188656 cri.go:89] found id: ""
	I0731 21:01:01.282300  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.282308  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:01.282314  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:01.282370  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:01.315598  188656 cri.go:89] found id: ""
	I0731 21:01:01.315628  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.315638  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:01.315644  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:01.315697  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:01.352492  188656 cri.go:89] found id: ""
	I0731 21:01:01.352521  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.352533  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:01.352540  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:01.352605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:01.387858  188656 cri.go:89] found id: ""
	I0731 21:01:01.387885  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.387894  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:01.387900  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:01.387950  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:01.425014  188656 cri.go:89] found id: ""
	I0731 21:01:01.425042  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.425052  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:01.425061  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:01.425129  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:01.463068  188656 cri.go:89] found id: ""
	I0731 21:01:01.463098  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.463107  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:01.463113  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:01.463171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:01.500174  188656 cri.go:89] found id: ""
	I0731 21:01:01.500203  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.500214  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:01.500229  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:01.500244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:01.554350  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:01.554389  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:01.569353  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:01.569394  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:01.641074  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:01.641095  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:01.641108  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:01.722340  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:01.722377  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:00.088915  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.089981  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.109888  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.109951  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.015302  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.513535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.264035  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:04.278374  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:04.278441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:04.314037  188656 cri.go:89] found id: ""
	I0731 21:01:04.314068  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.314079  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:04.314087  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:04.314159  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:04.347604  188656 cri.go:89] found id: ""
	I0731 21:01:04.347635  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.347646  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:04.347653  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:04.347718  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:04.382412  188656 cri.go:89] found id: ""
	I0731 21:01:04.382442  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.382454  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:04.382462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:04.382516  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:04.419097  188656 cri.go:89] found id: ""
	I0731 21:01:04.419130  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.419142  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:04.419150  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:04.419209  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:04.464561  188656 cri.go:89] found id: ""
	I0731 21:01:04.464592  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.464601  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:04.464607  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:04.464683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:04.500484  188656 cri.go:89] found id: ""
	I0731 21:01:04.500510  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.500518  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:04.500524  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:04.500577  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:04.536211  188656 cri.go:89] found id: ""
	I0731 21:01:04.536239  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.536250  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:04.536257  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:04.536324  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:04.569521  188656 cri.go:89] found id: ""
	I0731 21:01:04.569548  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.569556  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:04.569567  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:04.569583  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:04.621228  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:04.621261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:04.637500  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:04.637527  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:04.710577  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:04.710606  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:04.710623  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.788305  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:04.788343  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.329209  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:07.343021  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:07.343089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:07.378556  188656 cri.go:89] found id: ""
	I0731 21:01:07.378588  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.378603  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:07.378610  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:07.378679  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:07.416419  188656 cri.go:89] found id: ""
	I0731 21:01:07.416455  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.416467  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:07.416474  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:07.416538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:07.454720  188656 cri.go:89] found id: ""
	I0731 21:01:07.454749  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.454758  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:07.454764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:07.454815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:07.488963  188656 cri.go:89] found id: ""
	I0731 21:01:07.488995  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.489004  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:07.489009  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:07.489060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:07.531916  188656 cri.go:89] found id: ""
	I0731 21:01:07.531949  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.531961  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:07.531967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:07.532019  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:07.569233  188656 cri.go:89] found id: ""
	I0731 21:01:07.569266  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.569275  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:07.569281  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:07.569350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:07.606318  188656 cri.go:89] found id: ""
	I0731 21:01:07.606349  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.606360  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:07.606368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:07.606442  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:07.641408  188656 cri.go:89] found id: ""
	I0731 21:01:07.641436  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.641445  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:07.641454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:07.641466  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.681094  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:07.681123  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:07.734600  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:07.734641  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:07.748747  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:07.748779  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:07.821775  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:07.821799  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:07.821816  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.590174  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:07.089655  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.110694  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:08.610381  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.611128  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:09.013688  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:11.513361  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.399973  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:10.412908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:10.412986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:10.448866  188656 cri.go:89] found id: ""
	I0731 21:01:10.448895  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.448903  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:10.448909  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:10.448966  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:10.486309  188656 cri.go:89] found id: ""
	I0731 21:01:10.486338  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.486346  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:10.486352  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:10.486411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:10.522834  188656 cri.go:89] found id: ""
	I0731 21:01:10.522856  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.522863  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:10.522870  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:10.522929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:10.558272  188656 cri.go:89] found id: ""
	I0731 21:01:10.558304  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.558324  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:10.558330  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:10.558391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:10.596560  188656 cri.go:89] found id: ""
	I0731 21:01:10.596589  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.596600  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:10.596608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:10.596668  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:10.633488  188656 cri.go:89] found id: ""
	I0731 21:01:10.633518  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.633529  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:10.633537  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:10.633597  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:10.665779  188656 cri.go:89] found id: ""
	I0731 21:01:10.665812  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.665824  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:10.665832  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:10.665895  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:10.700526  188656 cri.go:89] found id: ""
	I0731 21:01:10.700556  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.700564  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:10.700575  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:10.700587  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:10.753507  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:10.753550  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:10.768056  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:10.768089  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:10.842120  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:10.842142  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:10.842159  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:10.916532  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:10.916565  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:13.456826  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:13.471064  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:13.471130  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:13.505660  188656 cri.go:89] found id: ""
	I0731 21:01:13.505694  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.505707  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:13.505713  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:13.505775  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:13.543084  188656 cri.go:89] found id: ""
	I0731 21:01:13.543109  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.543117  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:13.543123  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:13.543182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:13.578940  188656 cri.go:89] found id: ""
	I0731 21:01:13.578966  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.578974  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:13.578981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:13.579047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:13.617710  188656 cri.go:89] found id: ""
	I0731 21:01:13.617733  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.617740  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:13.617747  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:13.617810  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:13.653535  188656 cri.go:89] found id: ""
	I0731 21:01:13.653567  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.653579  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:13.653587  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:13.653658  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:13.687914  188656 cri.go:89] found id: ""
	I0731 21:01:13.687942  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.687953  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:13.687960  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:13.688031  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:13.725242  188656 cri.go:89] found id: ""
	I0731 21:01:13.725278  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.725287  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:13.725293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:13.725372  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:13.760890  188656 cri.go:89] found id: ""
	I0731 21:01:13.760918  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.760929  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:13.760943  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:13.760958  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:13.810212  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:13.810252  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:13.824229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:13.824259  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:09.588945  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:12.088514  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:14.088684  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.109760  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:15.109938  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.515603  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:16.013268  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:13.895306  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:13.895331  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:13.895344  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:13.976366  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:13.976411  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.520165  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:16.533970  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:16.534035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:16.571444  188656 cri.go:89] found id: ""
	I0731 21:01:16.571474  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.571482  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:16.571488  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:16.571539  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:16.608150  188656 cri.go:89] found id: ""
	I0731 21:01:16.608176  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.608186  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:16.608194  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:16.608254  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:16.643252  188656 cri.go:89] found id: ""
	I0731 21:01:16.643283  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.643294  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:16.643302  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:16.643363  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:16.679521  188656 cri.go:89] found id: ""
	I0731 21:01:16.679552  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.679563  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:16.679571  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:16.679624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:16.713502  188656 cri.go:89] found id: ""
	I0731 21:01:16.713532  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.713541  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:16.713547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:16.713624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:16.748276  188656 cri.go:89] found id: ""
	I0731 21:01:16.748309  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.748318  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:16.748324  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:16.748383  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:16.783895  188656 cri.go:89] found id: ""
	I0731 21:01:16.783929  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.783940  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:16.783948  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:16.784014  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:16.817362  188656 cri.go:89] found id: ""
	I0731 21:01:16.817392  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.817415  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:16.817425  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:16.817440  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:16.872584  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:16.872637  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:16.887240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:16.887275  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:16.961920  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:16.961949  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:16.961967  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:17.041889  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:17.041924  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.089420  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.089611  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:17.110442  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.111424  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.013772  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:20.514737  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.585935  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:19.600389  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:19.600475  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:19.635883  188656 cri.go:89] found id: ""
	I0731 21:01:19.635913  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.635924  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:19.635932  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:19.635995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:19.674413  188656 cri.go:89] found id: ""
	I0731 21:01:19.674441  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.674459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:19.674471  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:19.674538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:19.708181  188656 cri.go:89] found id: ""
	I0731 21:01:19.708211  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.708219  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:19.708224  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:19.708292  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:19.744737  188656 cri.go:89] found id: ""
	I0731 21:01:19.744774  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.744783  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:19.744791  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:19.744849  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:19.784366  188656 cri.go:89] found id: ""
	I0731 21:01:19.784398  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.784406  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:19.784412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:19.784465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:19.819234  188656 cri.go:89] found id: ""
	I0731 21:01:19.819269  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.819280  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:19.819289  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:19.819355  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:19.851462  188656 cri.go:89] found id: ""
	I0731 21:01:19.851494  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.851503  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:19.851510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:19.851563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:19.896575  188656 cri.go:89] found id: ""
	I0731 21:01:19.896604  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.896612  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:19.896624  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:19.896640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:19.952239  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:19.952284  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:19.969411  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:19.969442  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:20.042820  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:20.042847  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:20.042863  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:20.130070  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:20.130115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:22.674956  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:22.688548  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:22.688616  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:22.728750  188656 cri.go:89] found id: ""
	I0731 21:01:22.728775  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.728784  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:22.728790  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:22.728844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:22.763765  188656 cri.go:89] found id: ""
	I0731 21:01:22.763793  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.763801  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:22.763807  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:22.763858  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:22.799134  188656 cri.go:89] found id: ""
	I0731 21:01:22.799163  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.799172  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:22.799178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:22.799237  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:22.833972  188656 cri.go:89] found id: ""
	I0731 21:01:22.833998  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.834005  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:22.834011  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:22.834060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:22.869686  188656 cri.go:89] found id: ""
	I0731 21:01:22.869711  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.869719  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:22.869724  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:22.869776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:22.907919  188656 cri.go:89] found id: ""
	I0731 21:01:22.907950  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.907961  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:22.907969  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:22.908035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:22.947162  188656 cri.go:89] found id: ""
	I0731 21:01:22.947192  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.947204  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:22.947212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:22.947273  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:22.992822  188656 cri.go:89] found id: ""
	I0731 21:01:22.992860  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.992872  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:22.992884  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:22.992900  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:23.045552  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:23.045589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:23.059895  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:23.059925  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:23.135535  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:23.135561  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:23.135577  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:23.217468  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:23.217521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:20.588507  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.588759  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:21.611467  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:24.110813  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.514805  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.012583  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.013095  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.771615  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:25.785037  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:25.785115  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:25.821070  188656 cri.go:89] found id: ""
	I0731 21:01:25.821100  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.821112  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:25.821120  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:25.821176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:25.856174  188656 cri.go:89] found id: ""
	I0731 21:01:25.856206  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.856217  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:25.856225  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:25.856288  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:25.889440  188656 cri.go:89] found id: ""
	I0731 21:01:25.889473  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.889483  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:25.889490  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:25.889546  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:25.924770  188656 cri.go:89] found id: ""
	I0731 21:01:25.924796  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.924804  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:25.924811  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:25.924860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:25.963529  188656 cri.go:89] found id: ""
	I0731 21:01:25.963576  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.963588  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:25.963595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:25.963670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:26.000033  188656 cri.go:89] found id: ""
	I0731 21:01:26.000060  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.000069  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:26.000076  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:26.000133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:26.035310  188656 cri.go:89] found id: ""
	I0731 21:01:26.035341  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.035353  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:26.035359  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:26.035423  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:26.070096  188656 cri.go:89] found id: ""
	I0731 21:01:26.070119  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.070127  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:26.070138  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:26.070149  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:26.141198  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:26.141220  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:26.141237  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:26.219766  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:26.219805  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:26.264836  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:26.264864  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:26.316672  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:26.316709  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:28.832882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:24.588907  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.088961  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.089538  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:26.111336  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.609453  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:30.610379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.014929  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:31.512827  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.846243  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:28.846307  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:28.880312  188656 cri.go:89] found id: ""
	I0731 21:01:28.880339  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.880350  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:28.880358  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:28.880419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:28.914625  188656 cri.go:89] found id: ""
	I0731 21:01:28.914652  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.914660  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:28.914667  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:28.914726  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:28.949138  188656 cri.go:89] found id: ""
	I0731 21:01:28.949173  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.949185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:28.949192  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:28.949264  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:28.985229  188656 cri.go:89] found id: ""
	I0731 21:01:28.985258  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.985266  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:28.985272  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:28.985326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:29.021520  188656 cri.go:89] found id: ""
	I0731 21:01:29.021550  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.021562  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:29.021568  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:29.021629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:29.058639  188656 cri.go:89] found id: ""
	I0731 21:01:29.058671  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.058682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:29.058690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:29.058755  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:29.105435  188656 cri.go:89] found id: ""
	I0731 21:01:29.105458  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.105466  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:29.105472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:29.105528  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:29.147118  188656 cri.go:89] found id: ""
	I0731 21:01:29.147144  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.147152  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:29.147161  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:29.147177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:29.231698  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:29.231735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:29.276163  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:29.276200  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:29.330551  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:29.330589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:29.350293  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:29.350323  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:29.456073  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:31.956964  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:31.970712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:31.970780  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:32.009546  188656 cri.go:89] found id: ""
	I0731 21:01:32.009574  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.009585  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:32.009593  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:32.009674  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:32.046622  188656 cri.go:89] found id: ""
	I0731 21:01:32.046661  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.046672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:32.046680  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:32.046748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:32.080958  188656 cri.go:89] found id: ""
	I0731 21:01:32.080985  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.080993  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:32.080998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:32.081052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:32.117454  188656 cri.go:89] found id: ""
	I0731 21:01:32.117480  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.117489  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:32.117495  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:32.117561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:32.152335  188656 cri.go:89] found id: ""
	I0731 21:01:32.152369  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.152380  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:32.152387  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:32.152441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:32.186631  188656 cri.go:89] found id: ""
	I0731 21:01:32.186670  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.186682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:32.186691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:32.186761  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:32.221496  188656 cri.go:89] found id: ""
	I0731 21:01:32.221533  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.221544  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:32.221551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:32.221632  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:32.256315  188656 cri.go:89] found id: ""
	I0731 21:01:32.256341  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.256350  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:32.256360  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:32.256372  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:32.295759  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:32.295788  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:32.347855  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:32.347888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:32.360982  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:32.361012  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:32.433900  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:32.433926  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:32.433947  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:31.588474  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.590513  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:32.610672  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.110698  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.514600  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:36.013157  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.013369  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:35.027203  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:35.027298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:35.065567  188656 cri.go:89] found id: ""
	I0731 21:01:35.065599  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.065610  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:35.065617  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:35.065686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:35.104285  188656 cri.go:89] found id: ""
	I0731 21:01:35.104317  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.104328  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:35.104335  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:35.104430  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:35.151081  188656 cri.go:89] found id: ""
	I0731 21:01:35.151108  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.151119  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:35.151127  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:35.151190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:35.196844  188656 cri.go:89] found id: ""
	I0731 21:01:35.196875  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.196886  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:35.196894  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:35.196964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:35.253581  188656 cri.go:89] found id: ""
	I0731 21:01:35.253612  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.253623  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:35.253630  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:35.253703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:35.295791  188656 cri.go:89] found id: ""
	I0731 21:01:35.295819  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.295830  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:35.295838  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:35.295904  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:35.329405  188656 cri.go:89] found id: ""
	I0731 21:01:35.329441  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.329454  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:35.329462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:35.329526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:35.363976  188656 cri.go:89] found id: ""
	I0731 21:01:35.364009  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.364022  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:35.364035  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:35.364051  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:35.421213  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:35.421253  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:35.436612  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:35.436646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:35.514154  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:35.514182  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:35.514197  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:35.588048  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:35.588082  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:38.133466  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:38.147071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:38.147142  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:38.179992  188656 cri.go:89] found id: ""
	I0731 21:01:38.180024  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.180036  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:38.180044  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:38.180116  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:38.213784  188656 cri.go:89] found id: ""
	I0731 21:01:38.213816  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.213827  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:38.213834  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:38.213901  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:38.254190  188656 cri.go:89] found id: ""
	I0731 21:01:38.254220  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.254229  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:38.254235  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:38.254284  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:38.289695  188656 cri.go:89] found id: ""
	I0731 21:01:38.289732  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.289743  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:38.289751  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:38.289819  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:38.327743  188656 cri.go:89] found id: ""
	I0731 21:01:38.327777  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.327788  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:38.327797  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:38.327853  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:38.361373  188656 cri.go:89] found id: ""
	I0731 21:01:38.361409  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.361421  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:38.361428  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:38.361501  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:38.396832  188656 cri.go:89] found id: ""
	I0731 21:01:38.396860  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.396868  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:38.396873  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:38.396923  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:38.431822  188656 cri.go:89] found id: ""
	I0731 21:01:38.431855  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.431868  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:38.431880  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:38.431895  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:38.481994  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:38.482028  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:38.495885  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:38.495911  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:38.563384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:38.563411  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:38.563437  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:38.646806  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:38.646848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:36.089465  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.590301  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:37.611057  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.110731  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.015769  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.513690  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:41.187323  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:41.200995  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:41.201063  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:41.241620  188656 cri.go:89] found id: ""
	I0731 21:01:41.241651  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.241663  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:41.241671  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:41.241745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:41.279565  188656 cri.go:89] found id: ""
	I0731 21:01:41.279595  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.279604  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:41.279609  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:41.279666  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:41.320710  188656 cri.go:89] found id: ""
	I0731 21:01:41.320744  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.320755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:41.320763  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:41.320834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:41.356428  188656 cri.go:89] found id: ""
	I0731 21:01:41.356460  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.356472  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:41.356480  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:41.356544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:41.390493  188656 cri.go:89] found id: ""
	I0731 21:01:41.390525  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.390536  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:41.390544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:41.390612  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:41.424244  188656 cri.go:89] found id: ""
	I0731 21:01:41.424271  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.424282  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:41.424290  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:41.424350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:41.459916  188656 cri.go:89] found id: ""
	I0731 21:01:41.459946  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.459955  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:41.459961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:41.460012  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:41.493891  188656 cri.go:89] found id: ""
	I0731 21:01:41.493917  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.493926  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:41.493936  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:41.493950  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:41.544066  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:41.544106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:41.558504  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:41.558534  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:41.632996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:41.633021  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:41.633039  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:41.712637  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:41.712677  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:41.087979  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:43.088834  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.610136  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:45.109986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.514059  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.514535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.014970  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.255947  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:44.268961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:44.269050  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:44.304621  188656 cri.go:89] found id: ""
	I0731 21:01:44.304656  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.304668  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:44.304676  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:44.304732  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:44.339389  188656 cri.go:89] found id: ""
	I0731 21:01:44.339429  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.339441  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:44.339448  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:44.339510  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:44.373069  188656 cri.go:89] found id: ""
	I0731 21:01:44.373095  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.373103  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:44.373110  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:44.373179  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:44.408784  188656 cri.go:89] found id: ""
	I0731 21:01:44.408812  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.408821  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:44.408829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:44.408896  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:44.445636  188656 cri.go:89] found id: ""
	I0731 21:01:44.445671  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.445682  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:44.445690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:44.445759  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:44.483529  188656 cri.go:89] found id: ""
	I0731 21:01:44.483565  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.483577  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:44.483585  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:44.483643  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:44.517959  188656 cri.go:89] found id: ""
	I0731 21:01:44.517980  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.517987  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:44.517993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:44.518042  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:44.552322  188656 cri.go:89] found id: ""
	I0731 21:01:44.552367  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.552392  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:44.552405  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:44.552421  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:44.625005  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:44.625030  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:44.625043  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:44.702547  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:44.702585  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:44.741754  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:44.741792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:44.795179  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:44.795216  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.309995  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:47.323993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:47.324076  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:47.365546  188656 cri.go:89] found id: ""
	I0731 21:01:47.365576  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.365587  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:47.365595  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:47.365682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:47.402774  188656 cri.go:89] found id: ""
	I0731 21:01:47.402810  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.402822  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:47.402831  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:47.402899  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:47.440716  188656 cri.go:89] found id: ""
	I0731 21:01:47.440746  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.440755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:47.440761  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:47.440811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:47.479418  188656 cri.go:89] found id: ""
	I0731 21:01:47.479450  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.479461  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:47.479469  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:47.479535  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:47.514027  188656 cri.go:89] found id: ""
	I0731 21:01:47.514065  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.514074  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:47.514081  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:47.514149  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:47.550178  188656 cri.go:89] found id: ""
	I0731 21:01:47.550203  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.550212  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:47.550218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:47.550271  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:47.587844  188656 cri.go:89] found id: ""
	I0731 21:01:47.587873  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.587883  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:47.587891  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:47.587945  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:47.627581  188656 cri.go:89] found id: ""
	I0731 21:01:47.627608  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.627620  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:47.627633  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:47.627647  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:47.683364  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:47.683408  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.697882  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:47.697917  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:47.773804  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:47.773834  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:47.773848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:47.859356  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:47.859404  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:45.090199  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.091328  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.610631  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.109476  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:49.514186  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.013486  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.402403  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:50.417269  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:50.417332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:50.452762  188656 cri.go:89] found id: ""
	I0731 21:01:50.452786  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.452793  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:50.452799  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:50.452852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:50.486741  188656 cri.go:89] found id: ""
	I0731 21:01:50.486771  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.486782  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:50.486789  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:50.486855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:50.526144  188656 cri.go:89] found id: ""
	I0731 21:01:50.526174  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.526185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:50.526193  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:50.526246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:50.560957  188656 cri.go:89] found id: ""
	I0731 21:01:50.560985  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.560995  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:50.561003  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:50.561065  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:50.597228  188656 cri.go:89] found id: ""
	I0731 21:01:50.597258  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.597269  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:50.597275  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:50.597357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:50.638153  188656 cri.go:89] found id: ""
	I0731 21:01:50.638183  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.638199  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:50.638208  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:50.638270  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:50.672236  188656 cri.go:89] found id: ""
	I0731 21:01:50.672266  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.672274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:50.672280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:50.672340  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:50.704069  188656 cri.go:89] found id: ""
	I0731 21:01:50.704093  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.704102  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:50.704112  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:50.704125  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:50.757973  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:50.758010  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:50.771203  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:50.771229  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:50.842937  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:50.842956  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:50.842969  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:50.925819  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:50.925857  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.470691  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:53.485260  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:53.485332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:53.524110  188656 cri.go:89] found id: ""
	I0731 21:01:53.524139  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.524148  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:53.524154  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:53.524215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:53.557642  188656 cri.go:89] found id: ""
	I0731 21:01:53.557668  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.557676  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:53.557682  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:53.557737  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:53.595594  188656 cri.go:89] found id: ""
	I0731 21:01:53.595622  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.595641  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:53.595647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:53.595712  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:53.634458  188656 cri.go:89] found id: ""
	I0731 21:01:53.634487  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.634499  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:53.634507  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:53.634567  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:53.674124  188656 cri.go:89] found id: ""
	I0731 21:01:53.674149  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.674157  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:53.674164  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:53.674234  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:53.706861  188656 cri.go:89] found id: ""
	I0731 21:01:53.706888  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.706897  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:53.706903  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:53.706957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:53.745476  188656 cri.go:89] found id: ""
	I0731 21:01:53.745504  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.745511  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:53.745522  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:53.745575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:53.780847  188656 cri.go:89] found id: ""
	I0731 21:01:53.780878  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.780889  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:53.780902  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:53.780922  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:49.589017  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.088587  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.088885  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.109889  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.110634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.014383  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.512884  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:53.853469  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:53.853497  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:53.853517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:53.930506  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:53.930544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.975439  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:53.975475  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:54.027903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:54.027937  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.542860  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:56.557744  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:56.557813  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:56.596034  188656 cri.go:89] found id: ""
	I0731 21:01:56.596065  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.596075  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:56.596082  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:56.596146  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:56.631531  188656 cri.go:89] found id: ""
	I0731 21:01:56.631561  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.631572  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:56.631579  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:56.631653  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:56.665824  188656 cri.go:89] found id: ""
	I0731 21:01:56.665853  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.665865  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:56.665872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:56.665940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:56.698965  188656 cri.go:89] found id: ""
	I0731 21:01:56.698993  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.699002  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:56.699008  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:56.699074  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:56.735314  188656 cri.go:89] found id: ""
	I0731 21:01:56.735347  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.735359  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:56.735367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:56.735443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:56.770350  188656 cri.go:89] found id: ""
	I0731 21:01:56.770383  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.770393  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:56.770402  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:56.770485  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:56.808934  188656 cri.go:89] found id: ""
	I0731 21:01:56.808962  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.808970  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:56.808976  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:56.809027  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:56.845305  188656 cri.go:89] found id: ""
	I0731 21:01:56.845331  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.845354  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:56.845366  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:56.845383  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:56.922810  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:56.922832  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:56.922846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:56.998009  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:56.998046  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:57.037905  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:57.037934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:57.092438  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:57.092469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.591334  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.089696  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.110825  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.111013  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.111696  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.513270  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.514474  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.608087  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:59.622465  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:59.622537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:59.660221  188656 cri.go:89] found id: ""
	I0731 21:01:59.660254  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.660265  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:59.660274  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:59.660338  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:59.696158  188656 cri.go:89] found id: ""
	I0731 21:01:59.696193  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.696205  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:59.696213  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:59.696272  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:59.733607  188656 cri.go:89] found id: ""
	I0731 21:01:59.733635  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.733646  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:59.733656  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:59.733727  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:59.770298  188656 cri.go:89] found id: ""
	I0731 21:01:59.770327  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.770336  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:59.770342  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:59.770396  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:59.805630  188656 cri.go:89] found id: ""
	I0731 21:01:59.805659  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.805670  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:59.805682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:59.805749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:59.841064  188656 cri.go:89] found id: ""
	I0731 21:01:59.841089  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.841098  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:59.841106  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:59.841166  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:59.877237  188656 cri.go:89] found id: ""
	I0731 21:01:59.877265  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.877274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:59.877284  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:59.877364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:59.917102  188656 cri.go:89] found id: ""
	I0731 21:01:59.917138  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.917166  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:59.917179  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:59.917196  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:59.971806  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:59.971846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:59.986267  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:59.986304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:00.063185  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:00.063227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:00.063244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:00.148498  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:00.148541  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:02.690235  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:02.704623  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:02.704703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:02.740557  188656 cri.go:89] found id: ""
	I0731 21:02:02.740588  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.740599  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:02.740606  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:02.740667  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:02.776340  188656 cri.go:89] found id: ""
	I0731 21:02:02.776382  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.776391  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:02.776396  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:02.776449  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:02.811645  188656 cri.go:89] found id: ""
	I0731 21:02:02.811673  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.811683  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:02.811691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:02.811754  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:02.847226  188656 cri.go:89] found id: ""
	I0731 21:02:02.847259  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.847267  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:02.847273  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:02.847326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:02.885591  188656 cri.go:89] found id: ""
	I0731 21:02:02.885617  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.885626  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:02.885631  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:02.885694  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:02.924250  188656 cri.go:89] found id: ""
	I0731 21:02:02.924281  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.924289  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:02.924296  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:02.924358  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:02.959608  188656 cri.go:89] found id: ""
	I0731 21:02:02.959638  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.959649  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:02.959657  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:02.959731  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:02.998175  188656 cri.go:89] found id: ""
	I0731 21:02:02.998205  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.998215  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:02.998228  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:02.998248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:03.053320  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:03.053382  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:03.067681  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:03.067711  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:03.145222  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:03.145251  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:03.145270  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:03.228413  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:03.228456  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:01.590197  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:04.087692  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:02.610477  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.110544  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:03.016030  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.513082  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.780407  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:05.793872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:05.793952  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:05.828940  188656 cri.go:89] found id: ""
	I0731 21:02:05.828971  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.828980  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:05.828987  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:05.829051  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:05.866470  188656 cri.go:89] found id: ""
	I0731 21:02:05.866503  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.866515  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:05.866522  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:05.866594  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:05.904756  188656 cri.go:89] found id: ""
	I0731 21:02:05.904792  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.904807  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:05.904814  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:05.904868  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:05.941534  188656 cri.go:89] found id: ""
	I0731 21:02:05.941564  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.941574  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:05.941581  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:05.941649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:05.980413  188656 cri.go:89] found id: ""
	I0731 21:02:05.980453  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.980465  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:05.980472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:05.980563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:06.023226  188656 cri.go:89] found id: ""
	I0731 21:02:06.023258  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.023269  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:06.023277  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:06.023345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:06.061098  188656 cri.go:89] found id: ""
	I0731 21:02:06.061130  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.061138  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:06.061145  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:06.061195  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:06.097825  188656 cri.go:89] found id: ""
	I0731 21:02:06.097852  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.097860  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:06.097870  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:06.097883  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:06.149181  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:06.149223  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:06.164610  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:06.164651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:06.248639  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:06.248666  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:06.248684  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:06.332445  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:06.332486  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:06.089967  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.588610  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.610691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.611166  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.513999  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.514554  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:11.516493  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.873697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:08.887632  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:08.887745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:08.926002  188656 cri.go:89] found id: ""
	I0731 21:02:08.926032  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.926042  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:08.926051  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:08.926117  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:08.962999  188656 cri.go:89] found id: ""
	I0731 21:02:08.963028  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.963039  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:08.963047  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:08.963103  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:09.023016  188656 cri.go:89] found id: ""
	I0731 21:02:09.023043  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.023051  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:09.023057  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:09.023109  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:09.059672  188656 cri.go:89] found id: ""
	I0731 21:02:09.059699  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.059708  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:09.059714  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:09.059774  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:09.097603  188656 cri.go:89] found id: ""
	I0731 21:02:09.097635  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.097645  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:09.097653  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:09.097720  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:09.136210  188656 cri.go:89] found id: ""
	I0731 21:02:09.136240  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.136251  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:09.136259  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:09.136326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:09.176167  188656 cri.go:89] found id: ""
	I0731 21:02:09.176204  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.176211  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:09.176218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:09.176277  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:09.214151  188656 cri.go:89] found id: ""
	I0731 21:02:09.214180  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.214189  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:09.214199  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:09.214212  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:09.267579  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:09.267618  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:09.282420  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:09.282445  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:09.354067  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:09.354092  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:09.354111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:09.433454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:09.433500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.979715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:11.993050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:11.993123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:12.027731  188656 cri.go:89] found id: ""
	I0731 21:02:12.027759  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.027767  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:12.027773  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:12.027834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:12.064410  188656 cri.go:89] found id: ""
	I0731 21:02:12.064442  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.064452  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:12.064459  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:12.064525  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:12.101061  188656 cri.go:89] found id: ""
	I0731 21:02:12.101096  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.101107  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:12.101115  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:12.101176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:12.142240  188656 cri.go:89] found id: ""
	I0731 21:02:12.142271  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.142284  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:12.142292  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:12.142357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:12.184949  188656 cri.go:89] found id: ""
	I0731 21:02:12.184980  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.184988  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:12.184994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:12.185064  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:12.226031  188656 cri.go:89] found id: ""
	I0731 21:02:12.226068  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.226080  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:12.226089  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:12.226155  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:12.272880  188656 cri.go:89] found id: ""
	I0731 21:02:12.272913  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.272923  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:12.272931  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:12.272989  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:12.306968  188656 cri.go:89] found id: ""
	I0731 21:02:12.307011  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.307033  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:12.307068  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:12.307090  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:12.359357  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:12.359402  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:12.374817  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:12.374848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:12.445107  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:12.445128  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:12.445141  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:12.530017  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:12.530058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.088281  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:13.090442  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:12.110720  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.611142  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.013967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:16.014021  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:15.070277  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:15.084326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:15.084411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:15.123513  188656 cri.go:89] found id: ""
	I0731 21:02:15.123549  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.123562  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:15.123569  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:15.123624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:15.159855  188656 cri.go:89] found id: ""
	I0731 21:02:15.159888  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.159899  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:15.159908  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:15.159973  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:15.195879  188656 cri.go:89] found id: ""
	I0731 21:02:15.195911  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.195919  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:15.195926  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:15.195986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:15.231216  188656 cri.go:89] found id: ""
	I0731 21:02:15.231249  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.231258  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:15.231265  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:15.231331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:15.265711  188656 cri.go:89] found id: ""
	I0731 21:02:15.265740  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.265748  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:15.265754  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:15.265803  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:15.300991  188656 cri.go:89] found id: ""
	I0731 21:02:15.301020  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.301027  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:15.301033  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:15.301083  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:15.338507  188656 cri.go:89] found id: ""
	I0731 21:02:15.338533  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.338542  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:15.338550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:15.338614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:15.375540  188656 cri.go:89] found id: ""
	I0731 21:02:15.375583  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.375595  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:15.375606  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:15.375631  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:15.428903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:15.428946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:15.444018  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:15.444052  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:15.518807  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.518842  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:15.518859  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:15.602655  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:15.602693  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.158731  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:18.172861  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:18.172940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:18.207451  188656 cri.go:89] found id: ""
	I0731 21:02:18.207480  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.207489  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:18.207495  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:18.207555  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:18.244974  188656 cri.go:89] found id: ""
	I0731 21:02:18.245004  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.245013  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:18.245019  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:18.245079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:18.281589  188656 cri.go:89] found id: ""
	I0731 21:02:18.281622  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.281630  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:18.281637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:18.281698  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:18.321413  188656 cri.go:89] found id: ""
	I0731 21:02:18.321445  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.321455  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:18.321461  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:18.321526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:18.360600  188656 cri.go:89] found id: ""
	I0731 21:02:18.360627  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.360639  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:18.360647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:18.360707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:18.396312  188656 cri.go:89] found id: ""
	I0731 21:02:18.396344  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.396356  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:18.396364  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:18.396451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:18.431586  188656 cri.go:89] found id: ""
	I0731 21:02:18.431618  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.431630  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:18.431637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:18.431711  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:18.472995  188656 cri.go:89] found id: ""
	I0731 21:02:18.473025  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.473035  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:18.473047  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:18.473063  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:18.558826  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:18.558865  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.600083  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:18.600110  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:18.657944  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:18.657988  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:18.672860  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:18.672888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:18.748806  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.589795  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.088699  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:17.112784  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:19.609312  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.513798  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.014437  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.249418  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:21.263304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:21.263385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:21.298591  188656 cri.go:89] found id: ""
	I0731 21:02:21.298624  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.298635  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:21.298643  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:21.298707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:21.335913  188656 cri.go:89] found id: ""
	I0731 21:02:21.335939  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.335947  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:21.335954  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:21.336011  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:21.378314  188656 cri.go:89] found id: ""
	I0731 21:02:21.378347  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.378359  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:21.378368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:21.378436  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:21.422707  188656 cri.go:89] found id: ""
	I0731 21:02:21.422738  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.422748  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:21.422757  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:21.422826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:21.487851  188656 cri.go:89] found id: ""
	I0731 21:02:21.487878  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.487887  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:21.487893  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:21.487946  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:21.528944  188656 cri.go:89] found id: ""
	I0731 21:02:21.528970  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.528981  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:21.528990  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:21.529054  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:21.565091  188656 cri.go:89] found id: ""
	I0731 21:02:21.565118  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.565126  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:21.565132  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:21.565182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:21.599985  188656 cri.go:89] found id: ""
	I0731 21:02:21.600015  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.600027  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:21.600041  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:21.600057  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:21.652065  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:21.652106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:21.666497  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:21.666528  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:21.741853  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:21.741893  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:21.741919  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:21.822478  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:21.822517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:20.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:22.589558  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.610996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.111590  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:23.513209  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:25.514400  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
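The interleaved pod_ready lines (process IDs 188133, 188266, 187862) appear to come from parallel StartStop runs repeatedly polling metrics-server pods that never reach Ready. Outside the harness, roughly the same condition can be inspected with kubectl, for example (the kube-system namespace and the k8s-app=metrics-server label are assumed from the standard metrics-server manifests, not taken from the log):

    # print each metrics-server pod together with its Ready condition
    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'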
	I0731 21:02:24.363018  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:24.375640  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:24.375704  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:24.411383  188656 cri.go:89] found id: ""
	I0731 21:02:24.411416  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.411427  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:24.411436  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:24.411513  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:24.447536  188656 cri.go:89] found id: ""
	I0731 21:02:24.447565  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.447573  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:24.447578  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:24.447651  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:24.489270  188656 cri.go:89] found id: ""
	I0731 21:02:24.489301  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.489311  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:24.489320  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:24.489398  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:24.527891  188656 cri.go:89] found id: ""
	I0731 21:02:24.527922  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.527932  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:24.527938  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:24.527998  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:24.566854  188656 cri.go:89] found id: ""
	I0731 21:02:24.566886  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.566897  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:24.566904  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:24.566974  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:24.606234  188656 cri.go:89] found id: ""
	I0731 21:02:24.606267  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.606278  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:24.606285  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:24.606357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:24.642880  188656 cri.go:89] found id: ""
	I0731 21:02:24.642909  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.642921  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:24.642929  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:24.642982  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:24.680069  188656 cri.go:89] found id: ""
	I0731 21:02:24.680101  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.680112  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:24.680124  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:24.680142  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:24.735337  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:24.735378  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:24.749010  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:24.749040  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:24.826406  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:24.826441  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:24.826458  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.906995  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:24.907049  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.451405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:27.474178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:27.474251  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:27.514912  188656 cri.go:89] found id: ""
	I0731 21:02:27.514938  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.514945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:27.514951  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:27.515007  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:27.552850  188656 cri.go:89] found id: ""
	I0731 21:02:27.552880  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.552890  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:27.552896  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:27.552953  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:27.590468  188656 cri.go:89] found id: ""
	I0731 21:02:27.590496  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.590503  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:27.590509  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:27.590572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:27.626295  188656 cri.go:89] found id: ""
	I0731 21:02:27.626322  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.626330  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:27.626339  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:27.626391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:27.662654  188656 cri.go:89] found id: ""
	I0731 21:02:27.662690  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.662701  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:27.662708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:27.662770  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:27.699528  188656 cri.go:89] found id: ""
	I0731 21:02:27.699558  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.699566  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:27.699572  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:27.699639  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:27.740501  188656 cri.go:89] found id: ""
	I0731 21:02:27.740528  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.740539  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:27.740547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:27.740613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:27.778919  188656 cri.go:89] found id: ""
	I0731 21:02:27.778954  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.778966  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:27.778980  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:27.778999  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.815475  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:27.815500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:27.866578  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:27.866615  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:27.880799  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:27.880830  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:27.948987  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:27.949014  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:27.949032  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.596180  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:27.088624  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:26.610897  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:29.110263  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:28.014828  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.514006  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.532314  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:30.546245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:30.546317  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:30.581736  188656 cri.go:89] found id: ""
	I0731 21:02:30.581763  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.581772  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:30.581778  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:30.581837  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:30.618790  188656 cri.go:89] found id: ""
	I0731 21:02:30.618816  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.618824  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:30.618830  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:30.618886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:30.654504  188656 cri.go:89] found id: ""
	I0731 21:02:30.654530  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.654538  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:30.654544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:30.654603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:30.690570  188656 cri.go:89] found id: ""
	I0731 21:02:30.690598  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.690609  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:30.690617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:30.690683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:30.739676  188656 cri.go:89] found id: ""
	I0731 21:02:30.739705  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.739715  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:30.739723  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:30.739789  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:30.777860  188656 cri.go:89] found id: ""
	I0731 21:02:30.777891  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.777902  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:30.777911  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:30.777995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:30.814036  188656 cri.go:89] found id: ""
	I0731 21:02:30.814073  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.814088  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:30.814096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:30.814168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:30.847262  188656 cri.go:89] found id: ""
	I0731 21:02:30.847292  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.847304  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:30.847316  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:30.847338  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:30.898556  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:30.898596  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:30.912940  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:30.912974  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:30.987384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:30.987405  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:30.987419  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:31.071376  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:31.071416  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:33.613677  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:33.628304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:33.628380  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:33.662932  188656 cri.go:89] found id: ""
	I0731 21:02:33.662965  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.662977  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:33.662985  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:33.663055  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:33.697445  188656 cri.go:89] found id: ""
	I0731 21:02:33.697477  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.697487  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:33.697493  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:33.697553  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:33.734480  188656 cri.go:89] found id: ""
	I0731 21:02:33.734516  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.734527  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:33.734536  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:33.734614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:33.770069  188656 cri.go:89] found id: ""
	I0731 21:02:33.770095  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.770104  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:33.770111  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:33.770194  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:33.806315  188656 cri.go:89] found id: ""
	I0731 21:02:33.806341  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.806350  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:33.806356  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:33.806408  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:29.592432  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:32.088842  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:34.089378  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:31.112420  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.611815  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.014022  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:35.014517  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:37.018514  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.842747  188656 cri.go:89] found id: ""
	I0731 21:02:33.842775  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.842782  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:33.842789  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:33.842856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:33.877581  188656 cri.go:89] found id: ""
	I0731 21:02:33.877607  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.877616  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:33.877622  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:33.877682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:33.913238  188656 cri.go:89] found id: ""
	I0731 21:02:33.913263  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.913271  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:33.913282  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:33.913298  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:33.967112  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:33.967148  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:33.980961  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:33.980994  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:34.054886  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:34.054917  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:34.054939  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:34.143088  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:34.143127  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
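Each polling cycle above runs the same crictl listing once per control-plane component and finds nothing. Condensed into a single loop, the check the harness is effectively performing looks roughly like this (the loop form is an illustration; the crictl flags are exactly those shown in the log lines):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # an empty result corresponds to the 'No container was found matching' warnings above
      [ -z "$ids" ] && echo "no container found matching \"$name\""
    done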
	I0731 21:02:36.687110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:36.700649  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:36.700725  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:36.737796  188656 cri.go:89] found id: ""
	I0731 21:02:36.737829  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.737841  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:36.737849  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:36.737916  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:36.773010  188656 cri.go:89] found id: ""
	I0731 21:02:36.773048  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.773059  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:36.773067  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:36.773136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:36.813945  188656 cri.go:89] found id: ""
	I0731 21:02:36.813978  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.813988  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:36.813994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:36.814047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:36.849826  188656 cri.go:89] found id: ""
	I0731 21:02:36.849860  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.849872  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:36.849880  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:36.849943  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:36.887200  188656 cri.go:89] found id: ""
	I0731 21:02:36.887233  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.887244  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:36.887253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:36.887391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:36.922529  188656 cri.go:89] found id: ""
	I0731 21:02:36.922562  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.922573  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:36.922582  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:36.922644  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:36.958119  188656 cri.go:89] found id: ""
	I0731 21:02:36.958154  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.958166  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:36.958174  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:36.958240  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:37.001071  188656 cri.go:89] found id: ""
	I0731 21:02:37.001104  188656 logs.go:276] 0 containers: []
	W0731 21:02:37.001113  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:37.001123  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:37.001136  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:37.041248  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:37.041288  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:37.100519  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:37.100558  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:37.115157  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:37.115188  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:37.191232  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:37.191259  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:37.191277  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:36.588213  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.589224  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:36.109307  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.110675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:40.111284  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.514052  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.013265  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.772834  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:39.788137  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:39.788203  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:39.827329  188656 cri.go:89] found id: ""
	I0731 21:02:39.827361  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.827371  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:39.827378  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:39.827458  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:39.864855  188656 cri.go:89] found id: ""
	I0731 21:02:39.864882  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.864889  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:39.864897  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:39.864958  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:39.901955  188656 cri.go:89] found id: ""
	I0731 21:02:39.901981  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.901990  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:39.901996  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:39.902059  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:39.941376  188656 cri.go:89] found id: ""
	I0731 21:02:39.941402  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.941412  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:39.941418  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:39.941473  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:39.975321  188656 cri.go:89] found id: ""
	I0731 21:02:39.975352  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.975364  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:39.975394  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:39.975465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:40.010106  188656 cri.go:89] found id: ""
	I0731 21:02:40.010136  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.010148  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:40.010157  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:40.010220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:40.043963  188656 cri.go:89] found id: ""
	I0731 21:02:40.043997  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.044009  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:40.044017  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:40.044089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:40.079178  188656 cri.go:89] found id: ""
	I0731 21:02:40.079216  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.079224  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:40.079234  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:40.079248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:40.141115  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:40.141158  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:40.156722  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:40.156758  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:40.233758  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:40.233782  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:40.233797  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:40.317316  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:40.317375  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:42.858649  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:42.872135  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:42.872221  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:42.911966  188656 cri.go:89] found id: ""
	I0731 21:02:42.911998  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.912007  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:42.912014  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:42.912081  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:42.950036  188656 cri.go:89] found id: ""
	I0731 21:02:42.950070  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.950079  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:42.950085  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:42.950138  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:42.987201  188656 cri.go:89] found id: ""
	I0731 21:02:42.987233  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.987245  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:42.987253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:42.987326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:43.027250  188656 cri.go:89] found id: ""
	I0731 21:02:43.027285  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.027297  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:43.027306  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:43.027374  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:43.063419  188656 cri.go:89] found id: ""
	I0731 21:02:43.063448  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.063456  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:43.063463  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:43.063527  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:43.101155  188656 cri.go:89] found id: ""
	I0731 21:02:43.101184  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.101193  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:43.101199  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:43.101249  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:43.142633  188656 cri.go:89] found id: ""
	I0731 21:02:43.142658  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.142667  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:43.142675  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:43.142741  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:43.177747  188656 cri.go:89] found id: ""
	I0731 21:02:43.177780  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.177789  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:43.177799  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:43.177813  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:43.228074  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:43.228114  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:43.242132  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:43.242165  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:43.313026  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:43.313054  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:43.313072  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:43.394620  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:43.394663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:40.589306  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.589428  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.612236  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.110401  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:44.513370  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:46.514350  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.937932  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:45.951871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:45.951964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:45.987615  188656 cri.go:89] found id: ""
	I0731 21:02:45.987642  188656 logs.go:276] 0 containers: []
	W0731 21:02:45.987650  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:45.987656  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:45.987715  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:46.022632  188656 cri.go:89] found id: ""
	I0731 21:02:46.022659  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.022667  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:46.022674  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:46.022746  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:46.061153  188656 cri.go:89] found id: ""
	I0731 21:02:46.061182  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.061191  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:46.061196  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:46.061246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:46.099168  188656 cri.go:89] found id: ""
	I0731 21:02:46.099197  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.099206  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:46.099212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:46.099266  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:46.137269  188656 cri.go:89] found id: ""
	I0731 21:02:46.137300  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.137312  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:46.137321  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:46.137403  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:46.172330  188656 cri.go:89] found id: ""
	I0731 21:02:46.172391  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.172404  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:46.172417  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:46.172489  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:46.213314  188656 cri.go:89] found id: ""
	I0731 21:02:46.213358  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.213370  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:46.213378  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:46.213451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:46.248663  188656 cri.go:89] found id: ""
	I0731 21:02:46.248697  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.248707  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:46.248719  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:46.248735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:46.305433  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:46.305472  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:46.319065  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:46.319098  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:46.387025  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:46.387046  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:46.387058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:46.476721  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:46.476769  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:44.589757  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.089954  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.112823  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.114163  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.014193  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.014760  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.020882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:49.036502  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:49.036573  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:49.076478  188656 cri.go:89] found id: ""
	I0731 21:02:49.076509  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.076518  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:49.076525  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:49.076578  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:49.116065  188656 cri.go:89] found id: ""
	I0731 21:02:49.116098  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.116106  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:49.116112  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:49.116168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:49.153237  188656 cri.go:89] found id: ""
	I0731 21:02:49.153274  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.153287  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:49.153295  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:49.153385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:49.192821  188656 cri.go:89] found id: ""
	I0731 21:02:49.192849  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.192858  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:49.192864  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:49.192918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:49.230627  188656 cri.go:89] found id: ""
	I0731 21:02:49.230660  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.230671  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:49.230679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:49.230749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:49.266575  188656 cri.go:89] found id: ""
	I0731 21:02:49.266603  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.266611  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:49.266617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:49.266688  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:49.312489  188656 cri.go:89] found id: ""
	I0731 21:02:49.312522  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.312533  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:49.312541  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:49.312613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:49.348907  188656 cri.go:89] found id: ""
	I0731 21:02:49.348932  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.348941  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:49.348950  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:49.348965  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:49.363229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:49.363267  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:49.435708  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:49.435732  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:49.435745  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.522002  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:49.522047  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:49.566823  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:49.566868  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.122660  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:52.136559  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:52.136629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:52.173198  188656 cri.go:89] found id: ""
	I0731 21:02:52.173227  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.173236  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:52.173242  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:52.173310  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:52.208464  188656 cri.go:89] found id: ""
	I0731 21:02:52.208503  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.208514  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:52.208521  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:52.208590  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:52.246052  188656 cri.go:89] found id: ""
	I0731 21:02:52.246084  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.246091  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:52.246098  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:52.246160  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:52.281798  188656 cri.go:89] found id: ""
	I0731 21:02:52.281831  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.281843  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:52.281852  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:52.281918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:52.318924  188656 cri.go:89] found id: ""
	I0731 21:02:52.318954  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.318975  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:52.318983  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:52.319052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:52.356752  188656 cri.go:89] found id: ""
	I0731 21:02:52.356788  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.356800  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:52.356809  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:52.356874  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:52.391507  188656 cri.go:89] found id: ""
	I0731 21:02:52.391537  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.391545  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:52.391551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:52.391602  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:52.430714  188656 cri.go:89] found id: ""
	I0731 21:02:52.430749  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.430761  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:52.430774  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:52.430792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:52.482600  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:52.482629  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.535317  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:52.535361  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:52.549835  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:52.549874  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:52.628319  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:52.628347  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:52.628365  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.590499  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:52.089170  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.089832  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.610237  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.112782  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:53.513932  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.516784  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.216678  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:55.231142  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:55.231225  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:55.266283  188656 cri.go:89] found id: ""
	I0731 21:02:55.266321  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.266334  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:55.266341  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:55.266399  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:55.301457  188656 cri.go:89] found id: ""
	I0731 21:02:55.301493  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.301506  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:55.301514  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:55.301574  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:55.338427  188656 cri.go:89] found id: ""
	I0731 21:02:55.338453  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.338461  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:55.338467  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:55.338521  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:55.373718  188656 cri.go:89] found id: ""
	I0731 21:02:55.373748  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.373757  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:55.373764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:55.373846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:55.410989  188656 cri.go:89] found id: ""
	I0731 21:02:55.411022  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.411034  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:55.411042  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:55.411100  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:55.452867  188656 cri.go:89] found id: ""
	I0731 21:02:55.452904  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.452915  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:55.452924  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:55.452995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:55.512781  188656 cri.go:89] found id: ""
	I0731 21:02:55.512809  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.512821  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:55.512829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:55.512894  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:55.550460  188656 cri.go:89] found id: ""
	I0731 21:02:55.550487  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.550495  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:55.550505  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:55.550521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:55.625776  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:55.625804  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:55.625821  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:55.711276  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:55.711322  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:55.765078  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:55.765111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:55.818131  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:55.818176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:58.332914  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:58.346908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:58.346992  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:58.383641  188656 cri.go:89] found id: ""
	I0731 21:02:58.383686  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.383695  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:58.383700  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:58.383753  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:58.419538  188656 cri.go:89] found id: ""
	I0731 21:02:58.419566  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.419576  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:58.419584  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:58.419649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:58.457036  188656 cri.go:89] found id: ""
	I0731 21:02:58.457069  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.457080  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:58.457088  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:58.457162  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:58.497596  188656 cri.go:89] found id: ""
	I0731 21:02:58.497621  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.497629  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:58.497635  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:58.497706  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:58.538184  188656 cri.go:89] found id: ""
	I0731 21:02:58.538211  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.538220  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:58.538226  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:58.538291  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:58.584428  188656 cri.go:89] found id: ""
	I0731 21:02:58.584457  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.584468  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:58.584476  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:58.584537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:58.625052  188656 cri.go:89] found id: ""
	I0731 21:02:58.625084  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.625096  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:58.625103  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:58.625171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:58.662222  188656 cri.go:89] found id: ""
	I0731 21:02:58.662248  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.662256  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:58.662266  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:58.662278  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:58.740491  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:58.740530  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:58.782685  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:58.782714  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:58.833620  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:58.833668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:56.091277  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.589516  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:56.609399  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.610957  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.013927  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:00.015179  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.848679  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:58.848713  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:58.925496  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.426171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:01.440261  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:01.440341  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:01.477362  188656 cri.go:89] found id: ""
	I0731 21:03:01.477393  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.477405  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:01.477414  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:01.477483  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:01.516640  188656 cri.go:89] found id: ""
	I0731 21:03:01.516675  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.516692  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:01.516701  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:01.516764  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:01.560713  188656 cri.go:89] found id: ""
	I0731 21:03:01.560744  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.560756  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:01.560762  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:01.560844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:01.604050  188656 cri.go:89] found id: ""
	I0731 21:03:01.604086  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.604097  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:01.604105  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:01.604170  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:01.641358  188656 cri.go:89] found id: ""
	I0731 21:03:01.641391  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.641401  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:01.641406  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:01.641471  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:01.677332  188656 cri.go:89] found id: ""
	I0731 21:03:01.677380  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.677390  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:01.677397  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:01.677459  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:01.713781  188656 cri.go:89] found id: ""
	I0731 21:03:01.713815  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.713826  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:01.713833  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:01.713914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:01.757499  188656 cri.go:89] found id: ""
	I0731 21:03:01.757543  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.757552  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:01.757563  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:01.757575  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:01.832330  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.832370  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:01.832384  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:01.918996  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:01.919050  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:01.979268  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:01.979307  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:02.037528  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:02.037564  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:00.591373  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.089405  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:01.110471  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.611348  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:02.513998  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:05.015060  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:04.552758  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:04.566881  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:04.566960  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:04.604631  188656 cri.go:89] found id: ""
	I0731 21:03:04.604669  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.604680  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:04.604688  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:04.604791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:04.644027  188656 cri.go:89] found id: ""
	I0731 21:03:04.644052  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.644061  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:04.644068  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:04.644134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:04.680010  188656 cri.go:89] found id: ""
	I0731 21:03:04.680037  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.680045  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:04.680050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:04.680102  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:04.717095  188656 cri.go:89] found id: ""
	I0731 21:03:04.717123  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.717133  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:04.717140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:04.717212  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:04.755297  188656 cri.go:89] found id: ""
	I0731 21:03:04.755324  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.755331  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:04.755337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:04.755387  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:04.792073  188656 cri.go:89] found id: ""
	I0731 21:03:04.792104  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.792113  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:04.792119  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:04.792168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:04.828428  188656 cri.go:89] found id: ""
	I0731 21:03:04.828460  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.828468  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:04.828475  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:04.828541  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:04.863871  188656 cri.go:89] found id: ""
	I0731 21:03:04.863905  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.863916  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:04.863929  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:04.863946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:04.879591  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:04.879626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:04.962199  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:04.962227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:04.962245  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.048502  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:05.048547  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:05.090812  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:05.090838  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:07.647307  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:07.664586  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:07.664656  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:07.719851  188656 cri.go:89] found id: ""
	I0731 21:03:07.719887  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.719899  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:07.719908  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:07.719978  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:07.778295  188656 cri.go:89] found id: ""
	I0731 21:03:07.778330  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.778343  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:07.778350  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:07.778417  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:07.817911  188656 cri.go:89] found id: ""
	I0731 21:03:07.817937  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.817947  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:07.817954  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:07.818004  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:07.853177  188656 cri.go:89] found id: ""
	I0731 21:03:07.853211  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.853222  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:07.853229  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:07.853308  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:07.888992  188656 cri.go:89] found id: ""
	I0731 21:03:07.889020  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.889046  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:07.889055  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:07.889133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:07.924327  188656 cri.go:89] found id: ""
	I0731 21:03:07.924358  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.924369  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:07.924377  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:07.924461  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:07.964438  188656 cri.go:89] found id: ""
	I0731 21:03:07.964470  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.964480  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:07.964489  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:07.964572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:08.003566  188656 cri.go:89] found id: ""
	I0731 21:03:08.003610  188656 logs.go:276] 0 containers: []
	W0731 21:03:08.003621  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:08.003634  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:08.003651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:08.044246  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:08.044286  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:08.097479  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:08.097517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:08.113636  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:08.113663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:08.187217  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:08.187244  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:08.187261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.090205  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.589488  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:06.110184  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:08.111598  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.611986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.513036  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:09.513637  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.514176  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.771248  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:10.786159  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:10.786232  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:10.823724  188656 cri.go:89] found id: ""
	I0731 21:03:10.823756  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.823769  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:10.823777  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:10.823846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:10.862440  188656 cri.go:89] found id: ""
	I0731 21:03:10.862468  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.862480  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:10.862488  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:10.862544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:10.901499  188656 cri.go:89] found id: ""
	I0731 21:03:10.901527  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.901539  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:10.901547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:10.901611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:10.940255  188656 cri.go:89] found id: ""
	I0731 21:03:10.940279  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.940287  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:10.940293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:10.940356  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:10.975315  188656 cri.go:89] found id: ""
	I0731 21:03:10.975344  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.975353  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:10.975360  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:10.975420  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:11.011453  188656 cri.go:89] found id: ""
	I0731 21:03:11.011482  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.011538  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:11.011549  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:11.011611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:11.047846  188656 cri.go:89] found id: ""
	I0731 21:03:11.047887  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.047899  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:11.047907  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:11.047972  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:11.086243  188656 cri.go:89] found id: ""
	I0731 21:03:11.086271  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.086282  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:11.086293  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:11.086309  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:11.139390  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:11.139430  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:11.154637  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:11.154669  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:11.225996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:11.226019  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:11.226035  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:11.305235  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:11.305280  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:09.589831  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.590312  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.089750  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.110191  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:15.112258  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.013609  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:16.014143  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.845792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:13.859185  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:13.859261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:13.896017  188656 cri.go:89] found id: ""
	I0731 21:03:13.896047  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.896055  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:13.896061  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:13.896123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:13.932442  188656 cri.go:89] found id: ""
	I0731 21:03:13.932475  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.932486  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:13.932494  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:13.932564  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:13.971233  188656 cri.go:89] found id: ""
	I0731 21:03:13.971265  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.971274  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:13.971280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:13.971331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:14.009757  188656 cri.go:89] found id: ""
	I0731 21:03:14.009787  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.009796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:14.009805  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:14.009870  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:14.047946  188656 cri.go:89] found id: ""
	I0731 21:03:14.047979  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.047990  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:14.047998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:14.048056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:14.084687  188656 cri.go:89] found id: ""
	I0731 21:03:14.084720  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.084731  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:14.084739  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:14.084805  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:14.124831  188656 cri.go:89] found id: ""
	I0731 21:03:14.124861  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.124870  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:14.124876  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:14.124929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:14.161242  188656 cri.go:89] found id: ""
	I0731 21:03:14.161275  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.161286  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:14.161295  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:14.161308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:14.241060  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:14.241115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:14.282382  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:14.282414  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:14.335201  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:14.335249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:14.351345  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:14.351379  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:14.436524  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
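	The block above appears to be one iteration of minikube's control-plane wait loop on the v1.20.0 node (the binaries path is /var/lib/minikube/binaries/v1.20.0): pgrep finds no kube-apiserver process, crictl reports no control-plane containers at all, and "kubectl describe nodes" fails because nothing is listening on localhost:8443. The same probes can be replayed by hand inside the guest; this is a sketch using the commands copied from the log, assuming you can reach the node (for example via minikube ssh for the affected profile):

	    # assumed entry point: a shell on the affected node; paths/flags are as logged above
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig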
	I0731 21:03:16.937313  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:16.951403  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:16.951490  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:16.991735  188656 cri.go:89] found id: ""
	I0731 21:03:16.991766  188656 logs.go:276] 0 containers: []
	W0731 21:03:16.991777  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:16.991785  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:16.991852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:17.030327  188656 cri.go:89] found id: ""
	I0731 21:03:17.030353  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.030360  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:17.030366  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:17.030419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:17.068161  188656 cri.go:89] found id: ""
	I0731 21:03:17.068195  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.068206  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:17.068214  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:17.068286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:17.105561  188656 cri.go:89] found id: ""
	I0731 21:03:17.105590  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.105601  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:17.105609  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:17.105684  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:17.144503  188656 cri.go:89] found id: ""
	I0731 21:03:17.144529  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.144540  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:17.144547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:17.144610  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:17.183709  188656 cri.go:89] found id: ""
	I0731 21:03:17.183738  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.183747  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:17.183753  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:17.183815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:17.222083  188656 cri.go:89] found id: ""
	I0731 21:03:17.222109  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.222117  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:17.222124  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:17.222178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:17.259503  188656 cri.go:89] found id: ""
	I0731 21:03:17.259534  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.259547  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:17.259561  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:17.259578  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:17.300603  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:17.300642  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:17.352194  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:17.352235  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:17.367179  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:17.367209  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:17.440051  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:17.440074  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:17.440088  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:16.589914  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.082985  188133 pod_ready.go:81] duration metric: took 4m0.000734125s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:18.083015  188133 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:03:18.083039  188133 pod_ready.go:38] duration metric: took 4m12.543404692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:18.083069  188133 kubeadm.go:597] duration metric: took 4m20.473129745s to restartPrimaryControlPlane
	W0731 21:03:18.083176  188133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:18.083210  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
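	At this point process 188133 has hit the 4m0s WaitExtra timeout on metrics-server-78fcd8795b-jrzgg, gives up restarting the existing control plane, and falls back to the full "kubeadm reset" logged just above. A quick way to see why such a pod never became Ready is to inspect its Ready condition and events; this is a generic manual check, not part of the test harness, and it assumes the addon's standard k8s-app=metrics-server label:

	    # hypothetical manual check, run with the same kubeconfig/context the test used
	    kubectl -n kube-system get pods -l k8s-app=metrics-server
	    kubectl -n kube-system describe pod -l k8s-app=metrics-server
	    kubectl -n kube-system get pod -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'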
	I0731 21:03:17.610274  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:19.611592  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.514266  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.514967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.027644  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:20.041735  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:20.041826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:20.077436  188656 cri.go:89] found id: ""
	I0731 21:03:20.077470  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.077483  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:20.077491  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:20.077558  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:20.117420  188656 cri.go:89] found id: ""
	I0731 21:03:20.117449  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.117459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:20.117466  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:20.117533  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:20.157794  188656 cri.go:89] found id: ""
	I0731 21:03:20.157827  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.157838  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:20.157847  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:20.157914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:20.193760  188656 cri.go:89] found id: ""
	I0731 21:03:20.193788  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.193796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:20.193803  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:20.193856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:20.231731  188656 cri.go:89] found id: ""
	I0731 21:03:20.231764  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.231777  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:20.231785  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:20.231856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:20.268666  188656 cri.go:89] found id: ""
	I0731 21:03:20.268697  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.268709  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:20.268717  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:20.268786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:20.304355  188656 cri.go:89] found id: ""
	I0731 21:03:20.304392  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.304406  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:20.304414  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:20.304478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:20.343886  188656 cri.go:89] found id: ""
	I0731 21:03:20.343915  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.343927  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:20.343940  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:20.343957  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:20.358460  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:20.358494  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:20.435473  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:20.435499  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:20.435522  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:20.517961  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:20.518002  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:20.561528  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:20.561567  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.119570  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:23.134276  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:23.134366  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:23.172808  188656 cri.go:89] found id: ""
	I0731 21:03:23.172837  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.172846  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:23.172852  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:23.172914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:23.208038  188656 cri.go:89] found id: ""
	I0731 21:03:23.208067  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.208080  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:23.208086  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:23.208140  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:23.244493  188656 cri.go:89] found id: ""
	I0731 21:03:23.244523  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.244533  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:23.244539  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:23.244605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:23.280474  188656 cri.go:89] found id: ""
	I0731 21:03:23.280503  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.280510  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:23.280517  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:23.280581  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:23.317381  188656 cri.go:89] found id: ""
	I0731 21:03:23.317415  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.317428  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:23.317441  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:23.317511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:23.357023  188656 cri.go:89] found id: ""
	I0731 21:03:23.357051  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.357062  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:23.357071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:23.357134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:23.400176  188656 cri.go:89] found id: ""
	I0731 21:03:23.400211  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.400223  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:23.400230  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:23.400298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:23.440157  188656 cri.go:89] found id: ""
	I0731 21:03:23.440190  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.440201  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:23.440213  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:23.440234  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.494762  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:23.494802  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:23.511463  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:23.511510  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:23.600359  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:23.600383  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:23.600403  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:23.682683  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:23.682723  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:22.111495  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:24.112248  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:23.013460  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:25.014605  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:27.014900  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:26.225923  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:26.245708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.245791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.282882  188656 cri.go:89] found id: ""
	I0731 21:03:26.282910  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.282920  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:26.282928  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.282987  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.324227  188656 cri.go:89] found id: ""
	I0731 21:03:26.324268  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.324279  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:26.324287  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.324349  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.365996  188656 cri.go:89] found id: ""
	I0731 21:03:26.366027  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.366038  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:26.366047  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.366119  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.403790  188656 cri.go:89] found id: ""
	I0731 21:03:26.403823  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.403835  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:26.403844  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.403915  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.442924  188656 cri.go:89] found id: ""
	I0731 21:03:26.442947  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.442957  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:26.442964  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.443026  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.482260  188656 cri.go:89] found id: ""
	I0731 21:03:26.482286  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.482294  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:26.482300  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.482364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.526385  188656 cri.go:89] found id: ""
	I0731 21:03:26.526420  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.526432  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.526442  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:26.526511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:26.565217  188656 cri.go:89] found id: ""
	I0731 21:03:26.565250  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.565262  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:26.565275  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:26.565294  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:26.623437  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:26.623478  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:26.639642  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:26.639683  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:26.720274  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:26.720309  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.720325  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:26.799689  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:26.799728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:26.111147  188266 pod_ready.go:81] duration metric: took 4m0.007359775s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:26.111173  188266 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:03:26.111180  188266 pod_ready.go:38] duration metric: took 4m2.82978193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:26.111195  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:03:26.111220  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.111267  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.179210  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:26.179240  188266 cri.go:89] found id: ""
	I0731 21:03:26.179251  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:26.179315  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.184349  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.184430  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.221238  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:26.221267  188266 cri.go:89] found id: ""
	I0731 21:03:26.221277  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:26.221349  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.225908  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.225985  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.276864  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:26.276895  188266 cri.go:89] found id: ""
	I0731 21:03:26.276907  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:26.276974  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.281921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.282003  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.320868  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:26.320903  188266 cri.go:89] found id: ""
	I0731 21:03:26.320914  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:26.320984  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.326203  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.326272  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.378409  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:26.378433  188266 cri.go:89] found id: ""
	I0731 21:03:26.378442  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:26.378504  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.384006  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.384111  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.431113  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:26.431147  188266 cri.go:89] found id: ""
	I0731 21:03:26.431158  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:26.431226  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.437136  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.437213  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.484223  188266 cri.go:89] found id: ""
	I0731 21:03:26.484247  188266 logs.go:276] 0 containers: []
	W0731 21:03:26.484257  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.484263  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:26.484319  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:26.530433  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:26.530470  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.530476  188266 cri.go:89] found id: ""
	I0731 21:03:26.530486  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:26.530551  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.535747  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.541379  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:26.541406  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.586730  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.586769  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:27.133617  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:27.133672  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:27.183805  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:27.183846  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:27.226579  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:27.226620  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:27.290635  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:27.290671  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:27.330700  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:27.330732  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:27.370882  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:27.370918  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:27.426426  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:27.426471  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:27.466359  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:27.466396  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:27.515202  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:27.515235  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:27.569081  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:27.569122  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:27.586776  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:27.586809  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
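	Unlike the v1.20.0 node above, this gathering pass (process 188266, a v1.30.3 cluster per its binaries path) finds one container for each control-plane component plus two storage-provisioner instances, so in addition to the journal output it pulls the last 400 lines from each container directly. The per-container fetch is the same crictl invocation shown in the log and can be repeated for any of the IDs printed after "found id:":

	    # <container-id> stands in for one of the IDs listed above
	    sudo crictl ps -a --quiet --name=kube-apiserver      # resolve the component's container ID
	    sudo /usr/bin/crictl logs --tail 400 <container-id>  # dump that container's recent logs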
	I0731 21:03:30.223314  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:30.241046  188266 api_server.go:72] duration metric: took 4m14.179869513s to wait for apiserver process to appear ...
	I0731 21:03:30.241073  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:03:30.241118  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:30.241188  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:30.281267  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:30.281303  188266 cri.go:89] found id: ""
	I0731 21:03:30.281314  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:30.281397  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.285857  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:30.285927  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:30.321742  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:30.321770  188266 cri.go:89] found id: ""
	I0731 21:03:30.321779  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:30.321841  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.326210  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:30.326284  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:30.367998  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:30.368025  188266 cri.go:89] found id: ""
	I0731 21:03:30.368036  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:30.368101  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.372340  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:30.372412  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:30.413689  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:30.413714  188266 cri.go:89] found id: ""
	I0731 21:03:30.413725  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:30.413789  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.418525  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:30.418604  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:30.458505  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.458530  188266 cri.go:89] found id: ""
	I0731 21:03:30.458539  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:30.458587  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.462993  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:30.463058  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:30.500683  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.500711  188266 cri.go:89] found id: ""
	I0731 21:03:30.500722  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:30.500785  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.506197  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:30.506277  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:30.545243  188266 cri.go:89] found id: ""
	I0731 21:03:30.545273  188266 logs.go:276] 0 containers: []
	W0731 21:03:30.545284  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:30.545290  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:30.545371  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:30.588405  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:30.588459  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.588465  188266 cri.go:89] found id: ""
	I0731 21:03:30.588474  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:30.588539  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.593611  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.599345  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:30.599386  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.641530  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:30.641564  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.703655  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:30.703692  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.744119  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:30.744147  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.515238  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:32.014503  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:29.351214  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:29.365487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:29.365561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:29.402989  188656 cri.go:89] found id: ""
	I0731 21:03:29.403015  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.403022  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:29.403028  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:29.403079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:29.443276  188656 cri.go:89] found id: ""
	I0731 21:03:29.443310  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.443321  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:29.443329  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:29.443397  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:29.483285  188656 cri.go:89] found id: ""
	I0731 21:03:29.483311  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.483319  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:29.483326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:29.483384  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:29.522285  188656 cri.go:89] found id: ""
	I0731 21:03:29.522317  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.522329  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:29.522337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:29.522406  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:29.565115  188656 cri.go:89] found id: ""
	I0731 21:03:29.565145  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.565155  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:29.565163  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:29.565233  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:29.603768  188656 cri.go:89] found id: ""
	I0731 21:03:29.603805  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.603816  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:29.603822  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:29.603875  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:29.640380  188656 cri.go:89] found id: ""
	I0731 21:03:29.640406  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.640416  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:29.640424  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:29.640493  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:29.679699  188656 cri.go:89] found id: ""
	I0731 21:03:29.679727  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.679736  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:29.679749  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:29.679764  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:29.735555  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:29.735603  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:29.749670  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:29.749708  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:29.825950  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:29.825973  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:29.825989  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.915420  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:29.915463  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:32.462996  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:32.478659  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:32.478739  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:32.528625  188656 cri.go:89] found id: ""
	I0731 21:03:32.528651  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.528659  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:32.528665  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:32.528724  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:32.574371  188656 cri.go:89] found id: ""
	I0731 21:03:32.574399  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.574408  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:32.574414  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:32.574474  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:32.616916  188656 cri.go:89] found id: ""
	I0731 21:03:32.616960  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.616970  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:32.616975  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:32.617040  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:32.657725  188656 cri.go:89] found id: ""
	I0731 21:03:32.657758  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.657769  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:32.657777  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:32.657842  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:32.693197  188656 cri.go:89] found id: ""
	I0731 21:03:32.693226  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.693237  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:32.693245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:32.693316  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:32.733567  188656 cri.go:89] found id: ""
	I0731 21:03:32.733594  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.733602  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:32.733608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:32.733670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:32.774624  188656 cri.go:89] found id: ""
	I0731 21:03:32.774659  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.774671  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:32.774679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:32.774747  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:32.811755  188656 cri.go:89] found id: ""
	I0731 21:03:32.811790  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.811809  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:32.811822  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:32.811835  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:32.825512  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:32.825544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:32.902310  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:32.902339  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:32.902366  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:32.983347  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:32.983391  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:33.028037  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:33.028068  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
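
The block above is minikube's log-gathering pass while the old control plane is down: each component is probed by name through crictl and, with no containers found, only kubelet, dmesg, CRI-O and container-status output can be collected. A rough sketch of that enumeration pattern (assuming crictl is on the PATH and CRI-O is the runtime, as in this run) is:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")   # empty output means no container for that component
      echo "$c: ${ids:-<none>}"
    done

An empty result for every control-plane component, as seen here, is consistent with the later decision to reset the cluster and re-run kubeadm init.
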
	I0731 21:03:31.165988  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:31.166042  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:31.209564  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:31.209605  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:31.254061  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:31.254105  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:31.269227  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:31.269266  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:31.394442  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:31.394477  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:31.439011  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:31.439047  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:31.476798  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:31.476825  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:31.524460  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:31.524491  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:31.564254  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:31.564288  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:34.122836  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 21:03:34.128516  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 21:03:34.129484  188266 api_server.go:141] control plane version: v1.30.3
	I0731 21:03:34.129513  188266 api_server.go:131] duration metric: took 3.888432526s to wait for apiserver health ...
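
The healthz probe above queries the apiserver directly on port 8444 (this is the default-k8s-diff-port profile). A comparable manual check, assuming unauthenticated access to /healthz is allowed (kubeadm clusters normally grant this via the system:public-info-viewer binding; otherwise the admin kubeconfig's client certificate would be needed), would be:

    curl -sk https://192.168.50.221:8444/healthz
    # expected output on a healthy apiserver: ok
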
	I0731 21:03:34.129523  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:03:34.129554  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:34.129622  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:34.167751  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:34.167781  188266 cri.go:89] found id: ""
	I0731 21:03:34.167792  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:34.167860  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.172786  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:34.172858  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:34.212172  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.212204  188266 cri.go:89] found id: ""
	I0731 21:03:34.212215  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:34.212289  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.216651  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:34.216736  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:34.263492  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:34.263515  188266 cri.go:89] found id: ""
	I0731 21:03:34.263528  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:34.263592  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.268548  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:34.268630  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:34.309420  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:34.309453  188266 cri.go:89] found id: ""
	I0731 21:03:34.309463  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:34.309529  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.313921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:34.313993  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:34.354712  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.354740  188266 cri.go:89] found id: ""
	I0731 21:03:34.354754  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:34.354818  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.359363  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:34.359446  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:34.397596  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.397622  188266 cri.go:89] found id: ""
	I0731 21:03:34.397634  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:34.397710  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.402126  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:34.402207  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:34.447198  188266 cri.go:89] found id: ""
	I0731 21:03:34.447234  188266 logs.go:276] 0 containers: []
	W0731 21:03:34.447242  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:34.447248  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:34.447304  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:34.487429  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:34.487452  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.487457  188266 cri.go:89] found id: ""
	I0731 21:03:34.487464  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:34.487519  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.494362  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.499409  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:34.499438  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.549761  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:34.549802  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.588571  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:34.588603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.646590  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:34.646635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.691320  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:34.691353  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:35.098975  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:35.099018  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:35.153924  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:35.153964  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:35.168091  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:35.168121  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:35.214469  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:35.214511  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:35.260694  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:35.260724  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:35.299230  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:35.299261  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:35.413598  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:35.413635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:35.451331  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:35.451359  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
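
Each "Gathering logs for <component> [<id>] ..." step above tails the container's log through crictl rather than the kubelet API. The equivalent one-off commands, with a container ID resolved from crictl ps, are roughly:

    sudo crictl ps -a --name=kube-scheduler --quiet    # resolve the container ID
    sudo crictl logs --tail 400 <container-id>         # last 400 lines, as in the log above

Here <container-id> stands for one of the IDs already listed earlier in the log; it is left as a placeholder.
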
	I0731 21:03:35.582896  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:35.597483  188656 kubeadm.go:597] duration metric: took 4m3.860422558s to restartPrimaryControlPlane
	W0731 21:03:35.597559  188656 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:35.597598  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:36.054326  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:36.070199  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:36.081882  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:36.093300  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:36.093322  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:36.093396  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:36.103781  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:36.103843  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:36.114702  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:36.125213  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:36.125299  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:36.136299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.146441  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:36.146520  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.157524  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:36.168247  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:36.168327  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
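
The grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is otherwise removed before kubeadm init regenerates it. As a compact sketch of the logged behaviour (an approximation, not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

In this run all four files are already absent, so every grep fails and the rm calls are no-ops.
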
	I0731 21:03:36.178875  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:36.253662  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:03:36.253804  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:36.401385  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:36.401550  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:36.401686  188656 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:36.591601  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:34.513632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.515043  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.593492  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:36.593604  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:36.593690  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:36.593817  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:36.593907  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:36.594011  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:36.594090  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:36.594215  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:36.594602  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:36.595122  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:36.595323  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:36.595414  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:36.595548  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:37.052958  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:37.178980  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:37.375085  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:37.550735  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:37.571991  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:37.575050  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:37.575227  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:37.707194  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:37.997696  188266 system_pods.go:59] 8 kube-system pods found
	I0731 21:03:37.997725  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:37.997730  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:37.997734  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:37.997738  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:37.997741  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:37.997744  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:37.997750  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:37.997754  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:37.997762  188266 system_pods.go:74] duration metric: took 3.868231958s to wait for pod list to return data ...
	I0731 21:03:37.997773  188266 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:03:38.000640  188266 default_sa.go:45] found service account: "default"
	I0731 21:03:38.000665  188266 default_sa.go:55] duration metric: took 2.88647ms for default service account to be created ...
	I0731 21:03:38.000672  188266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:03:38.007107  188266 system_pods.go:86] 8 kube-system pods found
	I0731 21:03:38.007132  188266 system_pods.go:89] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:38.007137  188266 system_pods.go:89] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:38.007142  188266 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:38.007146  188266 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:38.007152  188266 system_pods.go:89] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:38.007158  188266 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:38.007164  188266 system_pods.go:89] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:38.007168  188266 system_pods.go:89] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:38.007175  188266 system_pods.go:126] duration metric: took 6.498733ms to wait for k8s-apps to be running ...
	I0731 21:03:38.007183  188266 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:03:38.007240  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:38.026906  188266 system_svc.go:56] duration metric: took 19.708653ms WaitForService to wait for kubelet
	I0731 21:03:38.026938  188266 kubeadm.go:582] duration metric: took 4m21.965767608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:03:38.026969  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:03:38.030479  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:03:38.030554  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 21:03:38.030577  188266 node_conditions.go:105] duration metric: took 3.601933ms to run NodePressure ...
	I0731 21:03:38.030600  188266 start.go:241] waiting for startup goroutines ...
	I0731 21:03:38.030611  188266 start.go:246] waiting for cluster config update ...
	I0731 21:03:38.030626  188266 start.go:255] writing updated cluster config ...
	I0731 21:03:38.031028  188266 ssh_runner.go:195] Run: rm -f paused
	I0731 21:03:38.082629  188266 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:03:38.084590  188266 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-125614" cluster and "default" namespace by default
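
At this point the default-k8s-diff-port-125614 profile is fully started and its context has been written to the kubeconfig. A quick manual sanity check against that cluster (illustrative only, not part of the test run) would be:

    kubectl --context default-k8s-diff-port-125614 get nodes
    kubectl --context default-k8s-diff-port-125614 -n kube-system get pods
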
	I0731 21:03:37.709295  188656 out.go:204]   - Booting up control plane ...
	I0731 21:03:37.709427  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:37.722549  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:37.723455  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:37.724194  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:37.726323  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:03:39.013773  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:41.016158  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:44.360883  188133 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.27764632s)
	I0731 21:03:44.360955  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:44.379069  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:44.389518  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:44.400223  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:44.400250  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:44.400302  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:44.410644  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:44.410718  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:44.421136  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:44.431161  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:44.431231  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:44.441936  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.451761  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:44.451820  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.462692  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:44.472982  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:44.473050  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:03:44.482980  188133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:44.532539  188133 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0731 21:03:44.532637  188133 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:44.651505  188133 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:44.651654  188133 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:44.651772  188133 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:44.660564  188133 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:44.662559  188133 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:44.662676  188133 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:44.662765  188133 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:44.662878  188133 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:44.662971  188133 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:44.663073  188133 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:44.663142  188133 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:44.663218  188133 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:44.663293  188133 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:44.663389  188133 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:44.663527  188133 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:44.663587  188133 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:44.663679  188133 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:44.813556  188133 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:44.908380  188133 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:03:45.005215  188133 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:45.138446  188133 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:45.222892  188133 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:45.223622  188133 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:45.226748  188133 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:43.513039  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.513901  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.228799  188133 out.go:204]   - Booting up control plane ...
	I0731 21:03:45.228934  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:45.229087  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:45.230021  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:45.249145  188133 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:45.258184  188133 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:45.258267  188133 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:45.392726  188133 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:03:45.392852  188133 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:03:45.899754  188133 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.694095ms
	I0731 21:03:45.899857  188133 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:03:51.901713  188133 kubeadm.go:310] [api-check] The API server is healthy after 6.00194457s
	I0731 21:03:51.914947  188133 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:03:51.932510  188133 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:03:51.971055  188133 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:03:51.971273  188133 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-916885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:03:51.985104  188133 kubeadm.go:310] [bootstrap-token] Using token: q86dx8.9ipyjyidvcwogxce
	I0731 21:03:47.515248  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:50.016206  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:51.986447  188133 out.go:204]   - Configuring RBAC rules ...
	I0731 21:03:51.986576  188133 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:03:51.993910  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:03:52.002474  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:03:52.007035  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:03:52.011708  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:03:52.020500  188133 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:03:52.310057  188133 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:03:52.778266  188133 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:03:53.308425  188133 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:03:53.309509  188133 kubeadm.go:310] 
	I0731 21:03:53.309585  188133 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:03:53.309597  188133 kubeadm.go:310] 
	I0731 21:03:53.309686  188133 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:03:53.309694  188133 kubeadm.go:310] 
	I0731 21:03:53.309715  188133 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:03:53.309771  188133 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:03:53.309875  188133 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:03:53.309894  188133 kubeadm.go:310] 
	I0731 21:03:53.310007  188133 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:03:53.310027  188133 kubeadm.go:310] 
	I0731 21:03:53.310088  188133 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:03:53.310099  188133 kubeadm.go:310] 
	I0731 21:03:53.310164  188133 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:03:53.310275  188133 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:03:53.310371  188133 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:03:53.310396  188133 kubeadm.go:310] 
	I0731 21:03:53.310499  188133 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:03:53.310601  188133 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:03:53.310611  188133 kubeadm.go:310] 
	I0731 21:03:53.310735  188133 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.310910  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 21:03:53.310961  188133 kubeadm.go:310] 	--control-plane 
	I0731 21:03:53.310970  188133 kubeadm.go:310] 
	I0731 21:03:53.311078  188133 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:03:53.311092  188133 kubeadm.go:310] 
	I0731 21:03:53.311222  188133 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.311402  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 21:03:53.312409  188133 kubeadm.go:310] W0731 21:03:44.497219    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312703  188133 kubeadm.go:310] W0731 21:03:44.498106    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312811  188133 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
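
The kubeadm output above ends with the bootstrap join commands for token q86dx8.9ipyjyidvcwogxce. Bootstrap tokens expire (24h by default), so if a node were joined later the command could be regenerated on the control-plane node with the standard kubeadm helper:

    sudo kubeadm token create --print-join-command
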
	I0731 21:03:53.312857  188133 cni.go:84] Creating CNI manager for ""
	I0731 21:03:53.312870  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:03:53.315035  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:03:53.316406  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:03:53.327870  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
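
Minikube writes a 496-byte bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist; its contents are not shown in the log. For orientation only, a generic bridge + host-local conflist (an assumed example, not necessarily what minikube generates; the 10.244.0.0/16 subnet is a placeholder) has roughly this shape:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
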
	I0731 21:03:53.352757  188133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:03:53.352902  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:53.352919  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-916885 minikube.k8s.io/updated_at=2024_07_31T21_03_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=no-preload-916885 minikube.k8s.io/primary=true
	I0731 21:03:53.403275  188133 ops.go:34] apiserver oom_adj: -16
	I0731 21:03:53.579520  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.080457  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.579898  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.080464  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.580211  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.080518  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.579806  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.080302  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.181987  188133 kubeadm.go:1113] duration metric: took 3.829153755s to wait for elevateKubeSystemPrivileges
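
The repeated "kubectl get sa default" calls above are minikube waiting for the kube-controller-manager to create the default service account before the minikube-rbac cluster-admin binding can take effect (elevateKubeSystemPrivileges). The retry loop reduces to something like the following, with the half-second interval inferred from the timestamps:

    until sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
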
	I0731 21:03:57.182024  188133 kubeadm.go:394] duration metric: took 4m59.623631766s to StartCluster
	I0731 21:03:57.182051  188133 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.182160  188133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:03:57.185297  188133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.185586  188133 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:03:57.185672  188133 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:03:57.185753  188133 addons.go:69] Setting storage-provisioner=true in profile "no-preload-916885"
	I0731 21:03:57.185776  188133 addons.go:69] Setting default-storageclass=true in profile "no-preload-916885"
	I0731 21:03:57.185797  188133 addons.go:69] Setting metrics-server=true in profile "no-preload-916885"
	I0731 21:03:57.185825  188133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-916885"
	I0731 21:03:57.185844  188133 addons.go:234] Setting addon metrics-server=true in "no-preload-916885"
	W0731 21:03:57.185856  188133 addons.go:243] addon metrics-server should already be in state true
	I0731 21:03:57.185864  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:03:57.185889  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.185785  188133 addons.go:234] Setting addon storage-provisioner=true in "no-preload-916885"
	W0731 21:03:57.185926  188133 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:03:57.185956  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.186201  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186226  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186247  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186279  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186301  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186345  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.187280  188133 out.go:177] * Verifying Kubernetes components...
	I0731 21:03:57.188864  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:03:57.202393  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0731 21:03:57.202431  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0731 21:03:57.202856  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.202946  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.203416  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203434  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203688  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203707  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203829  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204081  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204270  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.204428  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.204462  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.204960  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0731 21:03:57.205722  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.206275  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.206291  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.208245  188133 addons.go:234] Setting addon default-storageclass=true in "no-preload-916885"
	W0731 21:03:57.208264  188133 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:03:57.208296  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.208640  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.208663  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.208866  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.209432  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.209458  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.222235  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0731 21:03:57.222835  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.223408  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.223429  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.224137  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.224366  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.226564  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.227398  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0731 21:03:57.227842  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.228377  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.228399  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.228427  188133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:03:57.228836  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.229521  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.229573  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.230036  188133 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.230056  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:03:57.230075  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.230207  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0731 21:03:57.230601  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.230993  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.231008  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.231323  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.231519  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.233542  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.235239  188133 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:03:52.514632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:55.014017  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:57.235631  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236081  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.236105  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.236478  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:03:57.236493  188133 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:03:57.236510  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.236545  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.236711  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.236824  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.238988  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239335  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.239361  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.239645  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.239775  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.239902  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.252386  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0731 21:03:57.252846  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.253454  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.253474  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.253837  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.254048  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.255784  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.256020  188133 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.256037  188133 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:03:57.256057  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.258870  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259220  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.259254  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259446  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.259612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.259783  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.259940  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.405243  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:03:57.426852  188133 node_ready.go:35] waiting up to 6m0s for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494325  188133 node_ready.go:49] node "no-preload-916885" has status "Ready":"True"
	I0731 21:03:57.494352  188133 node_ready.go:38] duration metric: took 67.471516ms for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494365  188133 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:57.497819  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:03:57.497849  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:03:57.528118  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:03:57.528148  188133 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:03:57.557889  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.568872  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.583099  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:03:57.587315  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:57.587342  188133 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:03:57.645504  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:58.515635  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515650  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515667  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.515675  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516054  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516100  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516117  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516161  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516187  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516141  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516213  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516097  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516431  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516444  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.517889  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.517914  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.517930  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.569097  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.569120  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.569463  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.569511  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.569520  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726076  188133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.080526254s)
	I0731 21:03:58.726140  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726153  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.726469  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.726490  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726501  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726514  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.728603  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.728666  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.728688  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.728715  188133 addons.go:475] Verifying addon metrics-server=true in "no-preload-916885"
	I0731 21:03:58.730520  188133 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:03:58.731823  188133 addons.go:510] duration metric: took 1.546157188s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:03:57.515366  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.515730  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:02.013803  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.593082  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:00.589165  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:00.589192  188133 pod_ready.go:81] duration metric: took 3.00606369s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:00.589204  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:02.597316  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.096168  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.597832  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.597857  188133 pod_ready.go:81] duration metric: took 5.008646335s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.597866  188133 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603105  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.603128  188133 pod_ready.go:81] duration metric: took 5.254251ms for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603140  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610748  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.610771  188133 pod_ready.go:81] duration metric: took 7.623438ms for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610782  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615949  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.615966  188133 pod_ready.go:81] duration metric: took 5.176213ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615975  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620431  188133 pod_ready.go:92] pod "kube-proxy-b4h2z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.620450  188133 pod_ready.go:81] duration metric: took 4.469258ms for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620458  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993080  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.993104  188133 pod_ready.go:81] duration metric: took 372.640001ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993112  188133 pod_ready.go:38] duration metric: took 8.498733061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:05.993125  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:05.993186  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:06.009952  188133 api_server.go:72] duration metric: took 8.824325154s to wait for apiserver process to appear ...
	I0731 21:04:06.009981  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:06.010001  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 21:04:06.014715  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 21:04:06.015917  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:04:06.015944  188133 api_server.go:131] duration metric: took 5.952931ms to wait for apiserver health ...
	I0731 21:04:06.015954  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:06.196874  188133 system_pods.go:59] 9 kube-system pods found
	I0731 21:04:06.196907  188133 system_pods.go:61] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.196914  188133 system_pods.go:61] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.196918  188133 system_pods.go:61] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.196923  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.196929  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.196933  188133 system_pods.go:61] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.196938  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.196945  188133 system_pods.go:61] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.196950  188133 system_pods.go:61] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.196960  188133 system_pods.go:74] duration metric: took 180.999269ms to wait for pod list to return data ...
	I0731 21:04:06.196970  188133 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:06.394499  188133 default_sa.go:45] found service account: "default"
	I0731 21:04:06.394530  188133 default_sa.go:55] duration metric: took 197.552628ms for default service account to be created ...
	I0731 21:04:06.394539  188133 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:06.598314  188133 system_pods.go:86] 9 kube-system pods found
	I0731 21:04:06.598345  188133 system_pods.go:89] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.598354  188133 system_pods.go:89] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.598361  188133 system_pods.go:89] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.598370  188133 system_pods.go:89] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.598376  188133 system_pods.go:89] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.598389  188133 system_pods.go:89] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.598397  188133 system_pods.go:89] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.598408  188133 system_pods.go:89] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.598419  188133 system_pods.go:89] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.598430  188133 system_pods.go:126] duration metric: took 203.884264ms to wait for k8s-apps to be running ...
	I0731 21:04:06.598442  188133 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:06.598498  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:06.613642  188133 system_svc.go:56] duration metric: took 15.190132ms WaitForService to wait for kubelet
	I0731 21:04:06.613675  188133 kubeadm.go:582] duration metric: took 9.4280531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:06.613705  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:06.794163  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:06.794191  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:06.794204  188133 node_conditions.go:105] duration metric: took 180.492992ms to run NodePressure ...
	I0731 21:04:06.794218  188133 start.go:241] waiting for startup goroutines ...
	I0731 21:04:06.794227  188133 start.go:246] waiting for cluster config update ...
	I0731 21:04:06.794239  188133 start.go:255] writing updated cluster config ...
	I0731 21:04:06.794547  188133 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:06.844118  188133 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:04:06.846234  188133 out.go:177] * Done! kubectl is now configured to use "no-preload-916885" cluster and "default" namespace by default
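The startup sequence above finishes only after the harness polls the apiserver healthz endpoint ("Checking apiserver healthz at https://192.168.72.239:8443/healthz ... returned 200: ok"). A minimal manual sketch of that same check is shown below; it is an illustration only, not what the harness runs — the harness authenticates with the cluster's client certificates, whereas this sketch assumes the node IP/port from the log and uses -k to skip TLS verification for brevity.

	# Poll the kube-apiserver healthz endpoint until it reports "ok" (hypothetical manual check).
	# Node address taken from the log above; -k skips TLS verification for convenience.
	until curl -sk --max-time 2 https://192.168.72.239:8443/healthz | grep -qx ok; do
	  echo "apiserver not healthy yet, retrying..."
	  sleep 2
	done
	echo "apiserver reports ok"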
	I0731 21:04:04.015079  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:06.514907  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:08.514958  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:11.014341  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:13.514956  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:14.014985  187862 pod_ready.go:81] duration metric: took 4m0.007784922s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:04:14.015013  187862 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:04:14.015020  187862 pod_ready.go:38] duration metric: took 4m6.056814749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:14.015034  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:14.015079  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:14.015127  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:14.086254  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:14.086283  187862 cri.go:89] found id: ""
	I0731 21:04:14.086293  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:14.086368  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.091267  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:14.091334  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:14.138577  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.138613  187862 cri.go:89] found id: ""
	I0731 21:04:14.138624  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:14.138696  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.143245  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:14.143315  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:14.182295  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.182325  187862 cri.go:89] found id: ""
	I0731 21:04:14.182336  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:14.182400  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.186861  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:14.186936  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:14.230524  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:14.230547  187862 cri.go:89] found id: ""
	I0731 21:04:14.230555  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:14.230609  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.235285  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:14.235354  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:14.279188  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.279209  187862 cri.go:89] found id: ""
	I0731 21:04:14.279217  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:14.279268  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.284280  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:14.284362  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:14.333736  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:14.333764  187862 cri.go:89] found id: ""
	I0731 21:04:14.333774  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:14.333830  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.338652  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:14.338717  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:14.380632  187862 cri.go:89] found id: ""
	I0731 21:04:14.380663  187862 logs.go:276] 0 containers: []
	W0731 21:04:14.380672  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:14.380678  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:14.380747  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:14.424705  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.424727  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.424732  187862 cri.go:89] found id: ""
	I0731 21:04:14.424741  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:14.424801  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.429310  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.434243  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:14.434267  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:14.490743  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:14.490782  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.536575  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:14.536613  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.585952  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:14.585986  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.626198  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:14.626228  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:14.672674  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:14.672712  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.711759  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:14.711788  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.757020  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:14.757047  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:15.286344  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:15.286393  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:15.301933  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:15.301969  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:15.451532  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:15.451566  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:15.502398  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:15.502443  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:15.544678  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:15.544719  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:17.729291  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:04:17.730290  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:17.730512  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:18.104050  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:18.121028  187862 api_server.go:72] duration metric: took 4m17.382743031s to wait for apiserver process to appear ...
	I0731 21:04:18.121061  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:18.121109  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:18.121179  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:18.165472  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.165498  187862 cri.go:89] found id: ""
	I0731 21:04:18.165507  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:18.165559  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.169592  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:18.169663  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:18.216918  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.216942  187862 cri.go:89] found id: ""
	I0731 21:04:18.216951  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:18.217015  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.221467  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:18.221546  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:18.267066  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.267089  187862 cri.go:89] found id: ""
	I0731 21:04:18.267098  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:18.267164  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.271583  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:18.271662  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:18.316381  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.316404  187862 cri.go:89] found id: ""
	I0731 21:04:18.316412  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:18.316472  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.320859  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:18.320932  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:18.365366  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:18.365396  187862 cri.go:89] found id: ""
	I0731 21:04:18.365410  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:18.365476  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.369933  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:18.370019  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:18.411121  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:18.411143  187862 cri.go:89] found id: ""
	I0731 21:04:18.411152  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:18.411203  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.415493  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:18.415561  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:18.453040  187862 cri.go:89] found id: ""
	I0731 21:04:18.453069  187862 logs.go:276] 0 containers: []
	W0731 21:04:18.453078  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:18.453085  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:18.453153  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:18.499335  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:18.499359  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.499363  187862 cri.go:89] found id: ""
	I0731 21:04:18.499371  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:18.499446  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.504353  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.508619  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:18.508640  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:18.562692  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:18.562732  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.623405  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:18.623446  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.679472  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:18.679510  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.728893  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:18.728933  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.770963  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:18.770994  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:18.819353  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:18.819385  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:18.835654  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:18.835684  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:18.947479  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:18.947516  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.995005  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:18.995043  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:19.033246  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:19.033274  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:19.092703  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:19.092740  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:19.129738  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:19.129769  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:22.058935  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 21:04:22.063496  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 21:04:22.064670  187862 api_server.go:141] control plane version: v1.30.3
	I0731 21:04:22.064690  187862 api_server.go:131] duration metric: took 3.943623055s to wait for apiserver health ...
	I0731 21:04:22.064699  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:22.064721  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:22.064771  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:22.103710  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.103733  187862 cri.go:89] found id: ""
	I0731 21:04:22.103741  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:22.103798  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.108133  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:22.108203  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:22.159120  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.159145  187862 cri.go:89] found id: ""
	I0731 21:04:22.159155  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:22.159213  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.165107  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:22.165169  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:22.202426  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.202454  187862 cri.go:89] found id: ""
	I0731 21:04:22.202464  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:22.202524  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.206785  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:22.206842  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:22.245008  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.245039  187862 cri.go:89] found id: ""
	I0731 21:04:22.245050  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:22.245111  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.249467  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:22.249548  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:22.731353  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:22.731627  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:22.298105  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.298135  187862 cri.go:89] found id: ""
	I0731 21:04:22.298145  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:22.298209  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.302845  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:22.302902  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:22.346868  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.346898  187862 cri.go:89] found id: ""
	I0731 21:04:22.346909  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:22.346978  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.351246  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:22.351313  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:22.389698  187862 cri.go:89] found id: ""
	I0731 21:04:22.389730  187862 logs.go:276] 0 containers: []
	W0731 21:04:22.389742  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:22.389751  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:22.389817  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:22.425212  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.425234  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.425238  187862 cri.go:89] found id: ""
	I0731 21:04:22.425245  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:22.425298  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.429584  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.433471  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:22.433496  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.490354  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:22.490390  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.530117  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:22.530146  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:22.545249  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:22.545281  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:22.658074  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:22.658115  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.711537  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:22.711573  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.758644  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:22.758685  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.796716  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:22.796751  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.843502  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:22.843538  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.881738  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:22.881765  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:22.936317  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:22.936360  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.977562  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:22.977592  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:23.354873  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:23.354921  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:25.917553  187862 system_pods.go:59] 8 kube-system pods found
	I0731 21:04:25.917588  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.917593  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.917597  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.917601  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.917604  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.917608  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.917614  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.917624  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.917635  187862 system_pods.go:74] duration metric: took 3.852929636s to wait for pod list to return data ...
	I0731 21:04:25.917649  187862 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:25.920234  187862 default_sa.go:45] found service account: "default"
	I0731 21:04:25.920256  187862 default_sa.go:55] duration metric: took 2.600194ms for default service account to be created ...
	I0731 21:04:25.920264  187862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:25.926296  187862 system_pods.go:86] 8 kube-system pods found
	I0731 21:04:25.926325  187862 system_pods.go:89] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.926330  187862 system_pods.go:89] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.926334  187862 system_pods.go:89] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.926338  187862 system_pods.go:89] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.926342  187862 system_pods.go:89] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.926346  187862 system_pods.go:89] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.926352  187862 system_pods.go:89] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.926356  187862 system_pods.go:89] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.926365  187862 system_pods.go:126] duration metric: took 6.094538ms to wait for k8s-apps to be running ...
	I0731 21:04:25.926373  187862 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:25.926433  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:25.945225  187862 system_svc.go:56] duration metric: took 18.837835ms WaitForService to wait for kubelet
	I0731 21:04:25.945264  187862 kubeadm.go:582] duration metric: took 4m25.206984451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:25.945294  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:25.948480  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:25.948506  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:25.948520  187862 node_conditions.go:105] duration metric: took 3.219175ms to run NodePressure ...
	I0731 21:04:25.948535  187862 start.go:241] waiting for startup goroutines ...
	I0731 21:04:25.948543  187862 start.go:246] waiting for cluster config update ...
	I0731 21:04:25.948556  187862 start.go:255] writing updated cluster config ...
	I0731 21:04:25.949317  187862 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:26.000525  187862 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:04:26.002719  187862 out.go:177] * Done! kubectl is now configured to use "embed-certs-831240" cluster and "default" namespace by default
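The pod_ready waits above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, and the metrics-server pod that never becomes Ready) are performed through the Go client inside minikube. A rough manual equivalent with kubectl, sketched here as an illustration under the assumption that the kube context name from the log ("embed-certs-831240") is available locally, would be:

	# Hypothetical manual counterpart of the harness's pod_ready wait; label selectors
	# mirror those listed in the "extra waiting ... for all system-critical pods" log line.
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context embed-certs-831240 -n kube-system \
	    wait --for=condition=Ready pod -l "$sel" --timeout=6m
	done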
	I0731 21:04:32.732572  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:32.732835  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:52.734257  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:52.734530  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739465  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:05:32.739778  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739796  188656 kubeadm.go:310] 
	I0731 21:05:32.739854  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:05:32.739962  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:05:32.739988  188656 kubeadm.go:310] 
	I0731 21:05:32.740034  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:05:32.740083  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:05:32.740230  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:05:32.740245  188656 kubeadm.go:310] 
	I0731 21:05:32.740393  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:05:32.740441  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:05:32.740485  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:05:32.740494  188656 kubeadm.go:310] 
	I0731 21:05:32.740624  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:05:32.740741  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:05:32.740752  188656 kubeadm.go:310] 
	I0731 21:05:32.740888  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:05:32.741008  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:05:32.741084  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:05:32.741145  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:05:32.741152  188656 kubeadm.go:310] 
	I0731 21:05:32.741834  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:05:32.741967  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:05:32.742066  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:05:32.742264  188656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:05:32.742340  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:05:33.227380  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:05:33.243864  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:05:33.254208  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:05:33.254234  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:05:33.254313  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:05:33.264766  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:05:33.264846  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:05:33.275517  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:05:33.286281  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:05:33.286358  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:05:33.297108  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.307555  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:05:33.307627  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.318193  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:05:33.328155  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:05:33.328220  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:05:33.338088  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:05:33.569897  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:07:29.725230  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:07:29.725381  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:07:29.726868  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:07:29.726959  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:07:29.727064  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:07:29.727204  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:07:29.727322  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:07:29.727389  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:07:29.729525  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:07:29.729659  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:07:29.729761  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:07:29.729918  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:07:29.730026  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:07:29.730126  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:07:29.730268  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:07:29.730369  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:07:29.730461  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:07:29.730555  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:07:29.730658  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:07:29.730713  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:07:29.730790  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:07:29.730856  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:07:29.730931  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:07:29.731014  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:07:29.731111  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:07:29.731248  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:07:29.731339  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:07:29.731395  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:07:29.731486  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:07:29.733052  188656 out.go:204]   - Booting up control plane ...
	I0731 21:07:29.733146  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:07:29.733226  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:07:29.733305  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:07:29.733454  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:07:29.733656  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:07:29.733735  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:07:29.733830  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734048  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734116  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734275  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734331  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734543  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734642  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734868  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734966  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.735234  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.735252  188656 kubeadm.go:310] 
	I0731 21:07:29.735313  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:07:29.735376  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:07:29.735385  188656 kubeadm.go:310] 
	I0731 21:07:29.735432  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:07:29.735480  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:07:29.735624  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:07:29.735634  188656 kubeadm.go:310] 
	I0731 21:07:29.735779  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:07:29.735830  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:07:29.735879  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:07:29.735889  188656 kubeadm.go:310] 
	I0731 21:07:29.736038  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:07:29.736129  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:07:29.736141  188656 kubeadm.go:310] 
	I0731 21:07:29.736241  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:07:29.736315  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:07:29.736400  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:07:29.736480  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:07:29.736537  188656 kubeadm.go:310] 
	I0731 21:07:29.736579  188656 kubeadm.go:394] duration metric: took 7m58.053099483s to StartCluster
	I0731 21:07:29.736660  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:07:29.736793  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:07:29.802897  188656 cri.go:89] found id: ""
	I0731 21:07:29.802932  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.802945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:07:29.802953  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:07:29.803021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:07:29.840059  188656 cri.go:89] found id: ""
	I0731 21:07:29.840088  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.840098  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:07:29.840106  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:07:29.840178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:07:29.881030  188656 cri.go:89] found id: ""
	I0731 21:07:29.881058  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.881066  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:07:29.881073  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:07:29.881150  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:07:29.923495  188656 cri.go:89] found id: ""
	I0731 21:07:29.923524  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.923532  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:07:29.923538  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:07:29.923604  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:07:29.966128  188656 cri.go:89] found id: ""
	I0731 21:07:29.966156  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.966164  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:07:29.966171  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:07:29.966236  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:07:30.007648  188656 cri.go:89] found id: ""
	I0731 21:07:30.007678  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.007687  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:07:30.007693  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:07:30.007748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:07:30.047857  188656 cri.go:89] found id: ""
	I0731 21:07:30.047887  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.047903  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:07:30.047909  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:07:30.047959  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:07:30.087245  188656 cri.go:89] found id: ""
	I0731 21:07:30.087275  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.087283  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:07:30.087294  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:07:30.087308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:07:30.168205  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:07:30.168235  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:07:30.168256  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:07:30.276908  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:07:30.276951  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:07:30.322993  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:07:30.323030  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:07:30.375237  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:07:30.375287  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 21:07:30.392523  188656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:07:30.392579  188656 out.go:239] * 
	W0731 21:07:30.392653  188656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.392683  188656 out.go:239] * 
	W0731 21:07:30.393845  188656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:07:30.397498  188656 out.go:177] 
	W0731 21:07:30.398890  188656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.398959  188656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:07:30.398995  188656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:07:30.401295  188656 out.go:177] 
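The K8S_KUBELET_NOT_RUNNING exit above ends the start attempt, and the hint minikube prints points at the kubelet cgroup driver. A minimal, hedged sketch of acting on that suggestion, using a hypothetical profile name <profile>: the systemctl/journalctl checks and the --extra-config=kubelet.cgroup-driver=systemd flag are taken from the advice in the log, while the remaining start flags are illustrative assumptions for this KVM/cri-o job.

	# inspect the kubelet on the node, per the advice printed above
	out/minikube-linux-amd64 -p <profile> ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p <profile> ssh "sudo journalctl -xeu kubelet | tail -n 50"

	# retry the start with the kubelet forced onto the systemd cgroup driver
	out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd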
	
	
	==> CRI-O <==
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.224650061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460360224623816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=760f2580-db01-4d14-b427-0a7b8d09ff15 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.225392137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7df97d2e-6f63-4152-a926-3b8b80d52a25 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.225464562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7df97d2e-6f63-4152-a926-3b8b80d52a25 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.225729846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79a527efd9238c960ee7781d00091d8e65af2116e40d7d550c8f8d951f23ab0d,PodSandboxId:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459564730394018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{io.kubernetes.container.hash: e205fdc1,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025,PodSandboxId:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459562028189819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecca4db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459554971858843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e,PodSandboxId:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459554287405359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f
54c-4a54-9791-742327f2a9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5126dbb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459554233607076,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2
-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c,PodSandboxId:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459549641232498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,},Annotations:map[
string]string{io.kubernetes.container.hash: 5a402b30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085,PodSandboxId:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459549650587122,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b
56a4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447,PodSandboxId:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459549583499363,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07d
f30f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718,PodSandboxId:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459549565760379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff
89,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7df97d2e-6f63-4152-a926-3b8b80d52a25 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.264652810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0520f259-4475-435a-bb8a-8a13498ebd6a name=/runtime.v1.RuntimeService/Version
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.264805154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0520f259-4475-435a-bb8a-8a13498ebd6a name=/runtime.v1.RuntimeService/Version
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.265778400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b128088a-cc6d-4630-9360-3f51f066e090 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.266565221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460360266541538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b128088a-cc6d-4630-9360-3f51f066e090 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.267173920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d4d5dfe-f40e-47d6-80f0-d463e8312362 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.267251603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d4d5dfe-f40e-47d6-80f0-d463e8312362 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.267444177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79a527efd9238c960ee7781d00091d8e65af2116e40d7d550c8f8d951f23ab0d,PodSandboxId:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459564730394018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{io.kubernetes.container.hash: e205fdc1,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025,PodSandboxId:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459562028189819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecca4db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459554971858843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e,PodSandboxId:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459554287405359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f
54c-4a54-9791-742327f2a9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5126dbb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459554233607076,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2
-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c,PodSandboxId:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459549641232498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,},Annotations:map[
string]string{io.kubernetes.container.hash: 5a402b30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085,PodSandboxId:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459549650587122,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b
56a4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447,PodSandboxId:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459549583499363,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07d
f30f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718,PodSandboxId:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459549565760379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff
89,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d4d5dfe-f40e-47d6-80f0-d463e8312362 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.308487132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee33042a-91cc-4499-8f20-714abc2ccfef name=/runtime.v1.RuntimeService/Version
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.308579170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee33042a-91cc-4499-8f20-714abc2ccfef name=/runtime.v1.RuntimeService/Version
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.309864030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d880c54-e748-42ba-92bd-fb09e42fa1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.310308407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460360310283378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d880c54-e748-42ba-92bd-fb09e42fa1f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.311062989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f3e2b1c-ba59-43a5-9497-4c0130e61994 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.311133715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f3e2b1c-ba59-43a5-9497-4c0130e61994 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.311347504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79a527efd9238c960ee7781d00091d8e65af2116e40d7d550c8f8d951f23ab0d,PodSandboxId:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459564730394018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{io.kubernetes.container.hash: e205fdc1,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025,PodSandboxId:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459562028189819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecca4db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459554971858843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e,PodSandboxId:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459554287405359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f
54c-4a54-9791-742327f2a9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5126dbb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459554233607076,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2
-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c,PodSandboxId:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459549641232498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,},Annotations:map[
string]string{io.kubernetes.container.hash: 5a402b30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085,PodSandboxId:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459549650587122,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b
56a4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447,PodSandboxId:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459549583499363,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07d
f30f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718,PodSandboxId:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459549565760379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff
89,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f3e2b1c-ba59-43a5-9497-4c0130e61994 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.346268524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c75b33e-d5fb-42a0-94f5-772c20a2f189 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.346358050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c75b33e-d5fb-42a0-94f5-772c20a2f189 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.347450455Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4a991da-23db-4d7d-ba1c-76cc80d4cabd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.347961866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460360347940258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4a991da-23db-4d7d-ba1c-76cc80d4cabd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.348736094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1c04963-ec72-4fa9-b4e3-591a44615fde name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.348901442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1c04963-ec72-4fa9-b4e3-591a44615fde name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:12:40 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:12:40.349152131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79a527efd9238c960ee7781d00091d8e65af2116e40d7d550c8f8d951f23ab0d,PodSandboxId:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459564730394018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{io.kubernetes.container.hash: e205fdc1,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025,PodSandboxId:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459562028189819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecca4db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459554971858843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e,PodSandboxId:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459554287405359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f
54c-4a54-9791-742327f2a9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5126dbb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459554233607076,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2
-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c,PodSandboxId:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459549641232498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,},Annotations:map[
string]string{io.kubernetes.container.hash: 5a402b30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085,PodSandboxId:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459549650587122,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b
56a4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447,PodSandboxId:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459549583499363,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07d
f30f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718,PodSandboxId:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459549565760379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff
89,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1c04963-ec72-4fa9-b4e3-591a44615fde name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	79a527efd9238       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   9bac55b298bd1       busybox
	987b733bb2bf1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   4e4dda22151ab       coredns-7db6d8ff4d-gnrgs
	701883982e5a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       4                   0a8448729d58d       storage-provisioner
	c749bf9fffde8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   56b1b1a1f978c       kube-proxy-csdc4
	23b4eaaeaafcc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       3                   0a8448729d58d       storage-provisioner
	c578f56929d84       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   9fb5f81259d30       kube-controller-manager-default-k8s-diff-port-125614
	d53e71d03f523       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   cf8a88982129c       etcd-default-k8s-diff-port-125614
	936fe16f8f4b1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   e6e6c2fd49036       kube-scheduler-default-k8s-diff-port-125614
	89c6731c9919d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   aedbb71d9cd72       kube-apiserver-default-k8s-diff-port-125614
	
	
	==> coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48122 - 28949 "HINFO IN 9147693834618869361.3872042877004081620. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02692053s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-125614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-125614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=default-k8s-diff-port-125614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_51_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:51:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-125614
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:12:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:09:56 +0000   Wed, 31 Jul 2024 20:51:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:09:56 +0000   Wed, 31 Jul 2024 20:51:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:09:56 +0000   Wed, 31 Jul 2024 20:51:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:09:56 +0000   Wed, 31 Jul 2024 20:59:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.221
	  Hostname:    default-k8s-diff-port-125614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0452ed95624449e1ba8d764eff3412a0
	  System UUID:                0452ed95-6244-49e1-ba8d-764eff3412a0
	  Boot ID:                    11fb6f1d-4681-4ffa-9b18-ac7420edfab8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-gnrgs                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-125614                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-125614              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-125614     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-csdc4                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-125614              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-jf52w                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-125614 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-125614 event: Registered Node default-k8s-diff-port-125614 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-125614 event: Registered Node default-k8s-diff-port-125614 in Controller
	
	
	==> dmesg <==
	[Jul31 20:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050768] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041967] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.823661] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556048] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.358709] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 20:59] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.057443] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061366] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.171325] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.146680] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.289623] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.781226] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.061647] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.169971] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +5.604904] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.966337] systemd-fstab-generator[1605]: Ignoring "noauto" option for root device
	[  +3.776235] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.310419] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] <==
	{"level":"info","ts":"2024-07-31T20:59:10.303842Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7e2ae951029168ce","initial-advertise-peer-urls":["https://192.168.50.221:2380"],"listen-peer-urls":["https://192.168.50.221:2380"],"advertise-client-urls":["https://192.168.50.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T20:59:10.303889Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T20:59:10.303927Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.221:2380"}
	{"level":"info","ts":"2024-07-31T20:59:10.303949Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.221:2380"}
	{"level":"info","ts":"2024-07-31T20:59:11.41589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T20:59:11.416024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T20:59:11.416087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce received MsgPreVoteResp from 7e2ae951029168ce at term 2"}
	{"level":"info","ts":"2024-07-31T20:59:11.416133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T20:59:11.416163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce received MsgVoteResp from 7e2ae951029168ce at term 3"}
	{"level":"info","ts":"2024-07-31T20:59:11.416197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e2ae951029168ce became leader at term 3"}
	{"level":"info","ts":"2024-07-31T20:59:11.416274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7e2ae951029168ce elected leader 7e2ae951029168ce at term 3"}
	{"level":"info","ts":"2024-07-31T20:59:11.428553Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:59:11.428477Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7e2ae951029168ce","local-member-attributes":"{Name:default-k8s-diff-port-125614 ClientURLs:[https://192.168.50.221:2379]}","request-path":"/0/members/7e2ae951029168ce/attributes","cluster-id":"35ecb74b0d77a53b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T20:59:11.42928Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:59:11.43009Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:59:11.430213Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T20:59:11.431561Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:59:11.436276Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.221:2379"}
	{"level":"info","ts":"2024-07-31T20:59:52.74758Z","caller":"traceutil/trace.go:171","msg":"trace[2135354521] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"138.236485ms","start":"2024-07-31T20:59:52.609316Z","end":"2024-07-31T20:59:52.747552Z","steps":["trace[2135354521] 'process raft request'  (duration: 138.100787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:59:53.407384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"313.653088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-jf52w\" ","response":"range_response_count:1 size:4293"}
	{"level":"info","ts":"2024-07-31T20:59:53.408163Z","caller":"traceutil/trace.go:171","msg":"trace[1826197538] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-jf52w; range_end:; response_count:1; response_revision:644; }","duration":"314.45402ms","start":"2024-07-31T20:59:53.093642Z","end":"2024-07-31T20:59:53.408096Z","steps":["trace[1826197538] 'range keys from in-memory index tree'  (duration: 313.474144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:59:53.40825Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:59:53.09363Z","time spent":"314.599802ms","remote":"127.0.0.1:54982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4315,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-jf52w\" "}
	{"level":"info","ts":"2024-07-31T21:09:11.46713Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":871}
	{"level":"info","ts":"2024-07-31T21:09:11.47828Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":871,"took":"10.68118ms","hash":3895722527,"current-db-size-bytes":2744320,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2744320,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-31T21:09:11.478397Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3895722527,"revision":871,"compact-revision":-1}
	
	
	==> kernel <==
	 21:12:40 up 13 min,  0 users,  load average: 0.09, 0.23, 0.15
	Linux default-k8s-diff-port-125614 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] <==
	I0731 21:07:13.882911       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:09:12.883131       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:09:12.883280       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 21:09:13.884132       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:09:13.884261       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:09:13.884289       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:09:13.884157       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:09:13.884389       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:09:13.885326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:10:13.884838       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:10:13.884929       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:10:13.884943       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:10:13.885878       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:10:13.885998       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:10:13.886033       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:12:13.885937       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:12:13.886316       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:12:13.886372       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:12:13.886252       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:12:13.886525       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:12:13.887724       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] <==
	I0731 21:06:56.855269       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:07:26.368443       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:07:26.862802       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:07:56.374596       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:07:56.870931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:08:26.380756       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:08:26.878548       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:08:56.385868       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:08:56.886391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:09:26.391364       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:09:26.898342       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:09:56.395896       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:09:56.906582       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:10:13.908298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="250.579µs"
	I0731 21:10:24.912293       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="170.152µs"
	E0731 21:10:26.400511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:10:26.914848       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:10:56.407269       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:10:56.922776       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:11:26.414061       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:11:26.934611       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:11:56.418375       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:11:56.942803       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:12:26.423303       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:12:26.952558       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] <==
	I0731 20:59:14.491091       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:59:14.505031       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.221"]
	I0731 20:59:14.583884       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:59:14.583990       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:59:14.584027       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:59:14.596004       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:59:14.596236       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:59:14.596393       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:59:14.597577       1 config.go:192] "Starting service config controller"
	I0731 20:59:14.597635       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:59:14.597748       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:59:14.597772       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:59:14.598245       1 config.go:319] "Starting node config controller"
	I0731 20:59:14.598282       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:59:14.698095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:59:14.698155       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:59:14.698422       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] <==
	I0731 20:59:10.436472       1 serving.go:380] Generated self-signed cert in-memory
	W0731 20:59:12.817886       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 20:59:12.817999       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:59:12.818039       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 20:59:12.818069       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 20:59:12.878624       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 20:59:12.878830       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:59:12.885225       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 20:59:12.885477       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 20:59:12.885527       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 20:59:12.885565       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 20:59:12.985986       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:10:08 default-k8s-diff-port-125614 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:10:08 default-k8s-diff-port-125614 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:10:08 default-k8s-diff-port-125614 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:10:13 default-k8s-diff-port-125614 kubelet[942]: E0731 21:10:13.892301     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:10:24 default-k8s-diff-port-125614 kubelet[942]: E0731 21:10:24.894294     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:10:38 default-k8s-diff-port-125614 kubelet[942]: E0731 21:10:38.895788     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:10:53 default-k8s-diff-port-125614 kubelet[942]: E0731 21:10:53.892055     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:11:08 default-k8s-diff-port-125614 kubelet[942]: E0731 21:11:08.893607     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:11:08 default-k8s-diff-port-125614 kubelet[942]: E0731 21:11:08.912366     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:11:08 default-k8s-diff-port-125614 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:11:08 default-k8s-diff-port-125614 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:11:08 default-k8s-diff-port-125614 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:11:08 default-k8s-diff-port-125614 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:11:22 default-k8s-diff-port-125614 kubelet[942]: E0731 21:11:22.896526     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:11:35 default-k8s-diff-port-125614 kubelet[942]: E0731 21:11:35.892211     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:11:50 default-k8s-diff-port-125614 kubelet[942]: E0731 21:11:50.892290     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:12:04 default-k8s-diff-port-125614 kubelet[942]: E0731 21:12:04.892556     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:12:08 default-k8s-diff-port-125614 kubelet[942]: E0731 21:12:08.913118     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:12:08 default-k8s-diff-port-125614 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:12:08 default-k8s-diff-port-125614 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:12:08 default-k8s-diff-port-125614 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:12:08 default-k8s-diff-port-125614 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:12:15 default-k8s-diff-port-125614 kubelet[942]: E0731 21:12:15.892554     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:12:27 default-k8s-diff-port-125614 kubelet[942]: E0731 21:12:27.893647     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:12:39 default-k8s-diff-port-125614 kubelet[942]: E0731 21:12:39.892405     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	
	
	==> storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] <==
	I0731 20:59:14.347865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 20:59:14.350544       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] <==
	I0731 20:59:15.083083       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:59:15.092570       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:59:15.092665       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 20:59:32.500867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 20:59:32.501648       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125614_af725880-2b4d-4308-9377-e920a52e7319!
	I0731 20:59:32.502590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"65a81be9-ded5-45cb-ac18-08638a5bac46", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-125614_af725880-2b4d-4308-9377-e920a52e7319 became leader
	I0731 20:59:32.603649       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125614_af725880-2b4d-4308-9377-e920a52e7319!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-125614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jf52w
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-125614 describe pod metrics-server-569cc877fc-jf52w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-125614 describe pod metrics-server-569cc877fc-jf52w: exit status 1 (64.462839ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jf52w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-125614 describe pod metrics-server-569cc877fc-jf52w: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.31s)
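Note on the failure above: the only non-running pod found by the helpers_test.go check is metrics-server-569cc877fc-jf52w, which the kubelet log above shows stuck pulling the deliberately unreachable image fake.domain/registry.k8s.io/echoserver:1.4, so that pod alone does not explain the timeout; the assertion that fails is the 9m0s wait for a kubernetes-dashboard pod. The non-running-pod query itself can be reproduced outside the harness with a status.phase field selector. Below is a minimal client-go sketch (not the test suite's own code); it assumes client-go is available and that the kubeconfig path matches the KUBECONFIG value printed in the minikube start log further down.

// Minimal sketch (illustrative only): list pods whose phase is not Running,
// mirroring kubectl's --field-selector=status.phase!=Running used by the helper.
// Assumption: kubeconfig path taken from the KUBECONFIG shown in the log below.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-121704/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods in every namespace whose phase is anything other than Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}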

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0731 21:04:13.435749  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-916885 -n no-preload-916885
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:13:07.399276531 +0000 UTC m=+6359.649634354
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
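Note on the failure above: the wait polls the kubernetes-dashboard namespace for pods labelled k8s-app=kubernetes-dashboard for up to 9m0s, and the Audit table below records the corresponding "addons enable dashboard -p no-preload-916885" invocation without an End Time, which is consistent with no dashboard pod ever being created. A rough client-go sketch of this kind of bounded wait follows; it is not the helper's actual implementation, and it assumes the same kubeconfig path as the previous sketch plus an arbitrary 10-second poll interval.

// Rough sketch (illustrative only) of a bounded wait for dashboard pods.
// Assumptions: client-go, the kubeconfig path from the test environment,
// and a 10-second poll interval chosen arbitrarily.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-121704/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the 9m0s deadline used by the test has elapsed.
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil && len(pods.Items) > 0 {
			fmt.Printf("found %d kubernetes-dashboard pod(s)\n", len(pods.Items))
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard pods")
}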
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-916885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-916885 logs -n 25: (2.237600764s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC |                     |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo find                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo crio                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-341849                                       | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-248084 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-248084                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:51 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831240            | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-916885             | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:55:13
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:55:13.835355  188656 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:55:13.835514  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835525  188656 out.go:304] Setting ErrFile to fd 2...
	I0731 20:55:13.835531  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835717  188656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:55:13.836233  188656 out.go:298] Setting JSON to false
	I0731 20:55:13.837146  188656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9450,"bootTime":1722449864,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:55:13.837206  188656 start.go:139] virtualization: kvm guest
	I0731 20:55:13.839094  188656 out.go:177] * [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:55:13.840630  188656 notify.go:220] Checking for updates...
	I0731 20:55:13.840638  188656 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:55:13.841884  188656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:55:13.843054  188656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:55:13.844295  188656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:55:13.845348  188656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:55:13.846480  188656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:55:13.847974  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:55:13.848349  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.848390  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.863017  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0731 20:55:13.863418  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.863927  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.863980  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.864357  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.864625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.866178  188656 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 20:55:13.867248  188656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:55:13.867523  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.867552  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.881922  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0731 20:55:13.882304  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.882707  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.882729  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.883037  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.883214  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.917067  188656 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:55:13.918247  188656 start.go:297] selected driver: kvm2
	I0731 20:55:13.918260  188656 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.918396  188656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:55:13.919323  188656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.919428  188656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:55:13.934150  188656 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:55:13.934506  188656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:55:13.934569  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:55:13.934583  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:55:13.934630  188656 start.go:340] cluster config:
	{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.934737  188656 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.936401  188656 out.go:177] * Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	I0731 20:55:13.769565  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:13.937700  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:55:13.937735  188656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:55:13.937743  188656 cache.go:56] Caching tarball of preloaded images
	I0731 20:55:13.937806  188656 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:55:13.937816  188656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:55:13.937907  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:55:13.938068  188656 start.go:360] acquireMachinesLock for old-k8s-version-239115: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:55:19.845616  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:22.917614  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:28.997601  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:32.069596  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:38.149607  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:41.221579  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:47.301587  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:50.373695  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:56.453611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:59.525649  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:05.605640  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:08.677654  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:14.757599  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:17.829627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:23.909581  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:26.981613  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:33.061611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:36.133597  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:42.213638  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:45.285703  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:51.365653  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:54.437615  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:00.517627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:03.589595  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:09.669666  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:12.741661  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:18.821643  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:21.893594  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:27.973636  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:31.045651  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:37.125619  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:40.197656  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:46.277679  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:49.349535  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:55.429634  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:58.501611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:04.581620  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:07.653642  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:13.733571  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:16.805674  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:19.809697  188133 start.go:364] duration metric: took 4m15.439364065s to acquireMachinesLock for "no-preload-916885"
	I0731 20:58:19.809748  188133 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:19.809756  188133 fix.go:54] fixHost starting: 
	I0731 20:58:19.810113  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:19.810149  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:19.825131  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0731 20:58:19.825615  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:19.826110  188133 main.go:141] libmachine: Using API Version  1
	I0731 20:58:19.826132  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:19.826439  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:19.826616  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:19.826840  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 20:58:19.828267  188133 fix.go:112] recreateIfNeeded on no-preload-916885: state=Stopped err=<nil>
	I0731 20:58:19.828294  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	W0731 20:58:19.828471  188133 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:19.829957  188133 out.go:177] * Restarting existing kvm2 VM for "no-preload-916885" ...
	I0731 20:58:19.807506  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:19.807579  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.807919  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:58:19.807946  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.808126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:58:19.809580  187862 machine.go:97] duration metric: took 4m37.431426503s to provisionDockerMachine
	I0731 20:58:19.809625  187862 fix.go:56] duration metric: took 4m37.4520345s for fixHost
	I0731 20:58:19.809631  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 4m37.452053341s
	W0731 20:58:19.809664  187862 start.go:714] error starting host: provision: host is not running
	W0731 20:58:19.809893  187862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 20:58:19.809916  187862 start.go:729] Will try again in 5 seconds ...
	I0731 20:58:19.831221  188133 main.go:141] libmachine: (no-preload-916885) Calling .Start
	I0731 20:58:19.831409  188133 main.go:141] libmachine: (no-preload-916885) Ensuring networks are active...
	I0731 20:58:19.832210  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network default is active
	I0731 20:58:19.832536  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network mk-no-preload-916885 is active
	I0731 20:58:19.832885  188133 main.go:141] libmachine: (no-preload-916885) Getting domain xml...
	I0731 20:58:19.833563  188133 main.go:141] libmachine: (no-preload-916885) Creating domain...
	I0731 20:58:21.031310  188133 main.go:141] libmachine: (no-preload-916885) Waiting to get IP...
	I0731 20:58:21.032067  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.032519  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.032626  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.032509  189287 retry.go:31] will retry after 207.547113ms: waiting for machine to come up
	I0731 20:58:21.242229  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.242716  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.242797  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.242683  189287 retry.go:31] will retry after 307.483232ms: waiting for machine to come up
	I0731 20:58:21.552437  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.552954  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.552977  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.552911  189287 retry.go:31] will retry after 441.063904ms: waiting for machine to come up
	I0731 20:58:21.995514  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.995860  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.995903  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.995813  189287 retry.go:31] will retry after 596.915537ms: waiting for machine to come up
	I0731 20:58:22.594563  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:22.595037  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:22.595079  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:22.594988  189287 retry.go:31] will retry after 471.207023ms: waiting for machine to come up
	I0731 20:58:23.067499  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.067926  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.067950  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.067899  189287 retry.go:31] will retry after 756.851428ms: waiting for machine to come up
	I0731 20:58:23.826869  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.827277  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.827305  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.827232  189287 retry.go:31] will retry after 981.303239ms: waiting for machine to come up
	I0731 20:58:24.810830  187862 start.go:360] acquireMachinesLock for embed-certs-831240: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:58:24.810239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:24.810615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:24.810651  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:24.810584  189287 retry.go:31] will retry after 1.18169902s: waiting for machine to come up
	I0731 20:58:25.994320  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:25.994700  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:25.994728  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:25.994635  189287 retry.go:31] will retry after 1.781207961s: waiting for machine to come up
	I0731 20:58:27.778381  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:27.778764  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:27.778805  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:27.778734  189287 retry.go:31] will retry after 1.885603462s: waiting for machine to come up
	I0731 20:58:29.665633  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:29.666049  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:29.666070  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:29.666026  189287 retry.go:31] will retry after 2.664379174s: waiting for machine to come up
	I0731 20:58:32.333226  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:32.333615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:32.333644  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:32.333594  189287 retry.go:31] will retry after 2.932420774s: waiting for machine to come up
	I0731 20:58:35.267165  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:35.267527  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:35.267558  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:35.267496  189287 retry.go:31] will retry after 4.378841892s: waiting for machine to come up
	I0731 20:58:41.010483  188266 start.go:364] duration metric: took 4m25.11688001s to acquireMachinesLock for "default-k8s-diff-port-125614"
	I0731 20:58:41.010557  188266 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:41.010566  188266 fix.go:54] fixHost starting: 
	I0731 20:58:41.010992  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:41.011033  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:41.030450  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0731 20:58:41.030910  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:41.031360  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:58:41.031382  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:41.031703  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:41.031859  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:41.032020  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:58:41.033653  188266 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125614: state=Stopped err=<nil>
	I0731 20:58:41.033695  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	W0731 20:58:41.033872  188266 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:41.035898  188266 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-125614" ...
	I0731 20:58:39.650969  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651458  188133 main.go:141] libmachine: (no-preload-916885) Found IP for machine: 192.168.72.239
	I0731 20:58:39.651475  188133 main.go:141] libmachine: (no-preload-916885) Reserving static IP address...
	I0731 20:58:39.651516  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has current primary IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651957  188133 main.go:141] libmachine: (no-preload-916885) Reserved static IP address: 192.168.72.239
	I0731 20:58:39.651995  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.652023  188133 main.go:141] libmachine: (no-preload-916885) Waiting for SSH to be available...
	I0731 20:58:39.652054  188133 main.go:141] libmachine: (no-preload-916885) DBG | skip adding static IP to network mk-no-preload-916885 - found existing host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"}
	I0731 20:58:39.652073  188133 main.go:141] libmachine: (no-preload-916885) DBG | Getting to WaitForSSH function...
	I0731 20:58:39.654095  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654450  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.654479  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654636  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH client type: external
	I0731 20:58:39.654659  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa (-rw-------)
	I0731 20:58:39.654714  188133 main.go:141] libmachine: (no-preload-916885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:39.654729  188133 main.go:141] libmachine: (no-preload-916885) DBG | About to run SSH command:
	I0731 20:58:39.654768  188133 main.go:141] libmachine: (no-preload-916885) DBG | exit 0
	I0731 20:58:39.781409  188133 main.go:141] libmachine: (no-preload-916885) DBG | SSH cmd err, output: <nil>: 
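(Editor's note) The WaitForSSH step above repeatedly runs "exit 0" over an external ssh invocation until it succeeds. Below is a minimal sketch of the same idea, assuming a plain TCP reachability check on port 22 instead of minikube's ssh command; the address comes from the log, but the timeout values are illustrative assumptions:

// Minimal sketch of waiting for the guest's SSH port; NOT minikube's WaitForSSH.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 is accepting connections
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.72.239:22", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}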
	I0731 20:58:39.781741  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetConfigRaw
	I0731 20:58:39.782349  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:39.784813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785234  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.785266  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785643  188133 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/config.json ...
	I0731 20:58:39.785859  188133 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:39.785879  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:39.786095  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.788573  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.788840  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.788868  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.789025  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.789203  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789495  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.789661  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.789927  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.789941  188133 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:39.901661  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:39.901687  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.901920  188133 buildroot.go:166] provisioning hostname "no-preload-916885"
	I0731 20:58:39.901953  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.902142  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.904763  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905159  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.905186  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905347  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.905534  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905698  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905822  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.905977  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.906137  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.906155  188133 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-916885 && echo "no-preload-916885" | sudo tee /etc/hostname
	I0731 20:58:40.030955  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-916885
	
	I0731 20:58:40.030979  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.033905  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034254  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.034276  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034487  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.034693  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.034868  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.035014  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.035197  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.035373  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.035392  188133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-916885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-916885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-916885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:40.154331  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:40.154381  188133 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:40.154436  188133 buildroot.go:174] setting up certificates
	I0731 20:58:40.154452  188133 provision.go:84] configureAuth start
	I0731 20:58:40.154474  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:40.154813  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:40.157702  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158053  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.158075  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158218  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.160715  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161030  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.161048  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161186  188133 provision.go:143] copyHostCerts
	I0731 20:58:40.161258  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:40.161267  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:40.161372  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:40.161477  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:40.161487  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:40.161520  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:40.161590  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:40.161606  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:40.161639  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:40.161700  188133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.no-preload-916885 san=[127.0.0.1 192.168.72.239 localhost minikube no-preload-916885]
	I0731 20:58:40.341529  188133 provision.go:177] copyRemoteCerts
	I0731 20:58:40.341586  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:40.341612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.344557  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.344851  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.344871  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.345080  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.345266  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.345432  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.345677  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.431395  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:40.455012  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 20:58:40.477721  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:40.500174  188133 provision.go:87] duration metric: took 345.705192ms to configureAuth
	I0731 20:58:40.500203  188133 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:40.500377  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 20:58:40.500462  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.503077  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503438  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.503467  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503586  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.503780  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.503947  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.504065  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.504245  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.504467  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.504489  188133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:58:40.765409  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:58:40.765448  188133 machine.go:97] duration metric: took 979.574417ms to provisionDockerMachine
	I0731 20:58:40.765460  188133 start.go:293] postStartSetup for "no-preload-916885" (driver="kvm2")
	I0731 20:58:40.765474  188133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:58:40.765525  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:40.765895  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:58:40.765928  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.768314  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768610  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.768657  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768760  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.768926  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.769089  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.769199  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.855821  188133 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:58:40.860032  188133 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:58:40.860071  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:58:40.860148  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:58:40.860251  188133 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:58:40.860367  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:58:40.869291  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:40.892945  188133 start.go:296] duration metric: took 127.469545ms for postStartSetup
	I0731 20:58:40.892991  188133 fix.go:56] duration metric: took 21.083232755s for fixHost
	I0731 20:58:40.893019  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.895784  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896166  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.896197  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896316  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.896501  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896654  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896777  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.896964  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.897133  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.897143  188133 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:58:41.010330  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459520.969906971
	
	I0731 20:58:41.010352  188133 fix.go:216] guest clock: 1722459520.969906971
	I0731 20:58:41.010360  188133 fix.go:229] Guest: 2024-07-31 20:58:40.969906971 +0000 UTC Remote: 2024-07-31 20:58:40.892995844 +0000 UTC m=+276.656012666 (delta=76.911127ms)
	I0731 20:58:41.010390  188133 fix.go:200] guest clock delta is within tolerance: 76.911127ms
	I0731 20:58:41.010396  188133 start.go:83] releasing machines lock for "no-preload-916885", held for 21.200662427s
	I0731 20:58:41.010429  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.010733  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:41.013519  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.013841  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.013867  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.014034  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014637  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014829  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014914  188133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:58:41.014974  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.015051  188133 ssh_runner.go:195] Run: cat /version.json
	I0731 20:58:41.015074  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.017813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.017837  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018170  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018205  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018225  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018493  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018678  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018694  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018862  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018885  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018965  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.019040  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.107999  188133 ssh_runner.go:195] Run: systemctl --version
	I0731 20:58:41.133039  188133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:58:41.279485  188133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:58:41.285765  188133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:58:41.285838  188133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:58:41.302175  188133 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:58:41.302203  188133 start.go:495] detecting cgroup driver to use...
	I0731 20:58:41.302280  188133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:58:41.319896  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:58:41.334618  188133 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:58:41.334689  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:58:41.348292  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:58:41.363968  188133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:58:41.472992  188133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:58:41.605581  188133 docker.go:233] disabling docker service ...
	I0731 20:58:41.605669  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:58:41.620414  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:58:41.632951  188133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:58:41.783942  188133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:58:41.912311  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:58:41.931076  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:58:41.954672  188133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 20:58:41.954752  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.967478  188133 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:58:41.967567  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.978990  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.991689  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.003168  188133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:58:42.019114  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.034607  188133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.057543  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.070420  188133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:58:42.081173  188133 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:58:42.081245  188133 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:58:42.095455  188133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:58:42.106943  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:42.221724  188133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:58:42.375966  188133 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:58:42.376051  188133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:58:42.381473  188133 start.go:563] Will wait 60s for crictl version
	I0731 20:58:42.381548  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.385364  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:58:42.426783  188133 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:58:42.426872  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.459096  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.490043  188133 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 20:58:42.491578  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:42.494915  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495289  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:42.495310  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495610  188133 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 20:58:42.500266  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:42.515164  188133 kubeadm.go:883] updating cluster {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:58:42.515295  188133 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 20:58:42.515332  188133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:58:42.551930  188133 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 20:58:42.551961  188133 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:58:42.552025  188133 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.552047  188133 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 20:58:42.552067  188133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.552087  188133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.552071  188133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.552028  188133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.552129  188133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.552035  188133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554026  188133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.554044  188133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.554103  188133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554112  188133 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 20:58:42.554123  188133 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.554030  188133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.554032  188133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.554027  188133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.721659  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.743910  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.750941  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 20:58:42.772074  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.781921  188133 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 20:58:42.781964  188133 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.782014  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.793926  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.813112  188133 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 20:58:42.813154  188133 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.813202  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.916544  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.937647  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.948145  188133 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 20:58:42.948194  188133 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.948208  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.948237  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.948268  188133 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 20:58:42.948300  188133 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.948338  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.948341  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.006187  188133 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 20:58:43.006238  188133 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.006295  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045484  188133 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 20:58:43.045541  188133 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.045585  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045589  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:43.045643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 20:58:43.045710  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 20:58:43.045730  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.045741  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:43.045780  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.045823  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:43.122382  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122429  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 20:58:43.122449  188133 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122489  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122497  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122513  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 20:58:43.122517  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122588  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.122637  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.122731  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.522969  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:41.037393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Start
	I0731 20:58:41.037575  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring networks are active...
	I0731 20:58:41.038366  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network default is active
	I0731 20:58:41.038703  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network mk-default-k8s-diff-port-125614 is active
	I0731 20:58:41.039402  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Getting domain xml...
	I0731 20:58:41.040218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Creating domain...
	I0731 20:58:42.319123  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting to get IP...
	I0731 20:58:42.320314  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320801  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320908  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.320797  189429 retry.go:31] will retry after 274.801111ms: waiting for machine to come up
	I0731 20:58:42.597444  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597866  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597914  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.597842  189429 retry.go:31] will retry after 382.328248ms: waiting for machine to come up
	I0731 20:58:42.981533  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982018  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982051  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.981955  189429 retry.go:31] will retry after 426.247953ms: waiting for machine to come up
	I0731 20:58:43.409523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.409795  189429 retry.go:31] will retry after 483.501118ms: waiting for machine to come up
	I0731 20:58:43.894451  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894844  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894874  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.894779  189429 retry.go:31] will retry after 759.968593ms: waiting for machine to come up
	I0731 20:58:44.656097  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656551  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656580  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:44.656503  189429 retry.go:31] will retry after 766.563008ms: waiting for machine to come up
	I0731 20:58:45.424382  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424793  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424831  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:45.424744  189429 retry.go:31] will retry after 1.172047019s: waiting for machine to come up
	I0731 20:58:45.107333  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.984807614s)
	I0731 20:58:45.107368  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 20:58:45.107393  188133 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107452  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107471  188133 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0: (1.98485492s)
	I0731 20:58:45.107523  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.985012474s)
	I0731 20:58:45.107534  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107560  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107563  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.984910291s)
	I0731 20:58:45.107585  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107609  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.984862504s)
	I0731 20:58:45.107619  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107626  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107668  188133 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.584674739s)
	I0731 20:58:45.107701  188133 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 20:58:45.107729  188133 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:45.107761  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:48.706832  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.599347822s)
	I0731 20:58:48.706872  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 20:58:48.706886  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (3.599247467s)
	I0731 20:58:48.706923  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 20:58:48.706898  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.706925  188133 ssh_runner.go:235] Completed: which crictl: (3.599146318s)
	I0731 20:58:48.706979  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:48.706980  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.747292  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 20:58:48.747415  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:46.598636  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599086  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599117  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:46.599033  189429 retry.go:31] will retry after 1.204122239s: waiting for machine to come up
	I0731 20:58:47.805441  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805922  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:47.805864  189429 retry.go:31] will retry after 1.466632244s: waiting for machine to come up
	I0731 20:58:49.274527  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275030  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:49.274961  189429 retry.go:31] will retry after 2.04848438s: waiting for machine to come up
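The retry.go lines above show the kvm2 driver repeatedly asking libvirt for the VM's DHCP lease and sleeping a growing, jittered interval between attempts until the machine reports an IP. A minimal sketch of that wait-with-backoff pattern follows; the lookupIP helper, the starting delay, and the jitter factor are illustrative assumptions, not minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in (assumption) for querying the hypervisor's DHCP
// leases for the domain's current IP address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP until it succeeds or the deadline passes,
// sleeping a jittered, growing interval between attempts, in the same
// spirit as the "will retry after ..." messages in the log.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	delay := 300 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		ip, err := lookupIP(domain)
		if err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay so repeated polls spread out.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for %s to come up", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-125614", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}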
	I0731 20:58:50.902082  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.154633427s)
	I0731 20:58:50.902138  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 20:58:50.902203  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.195118092s)
	I0731 20:58:50.902226  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 20:58:50.902259  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:50.902320  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:52.863335  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.960989386s)
	I0731 20:58:52.863370  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 20:58:52.863394  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:52.863434  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:51.324633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325056  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325080  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:51.324983  189429 retry.go:31] will retry after 1.991151757s: waiting for machine to come up
	I0731 20:58:53.318784  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319133  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319164  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:53.319077  189429 retry.go:31] will retry after 2.631932264s: waiting for machine to come up
	I0731 20:58:54.629811  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.766355185s)
	I0731 20:58:54.629840  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 20:58:54.629882  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:54.629954  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:55.983610  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.353622135s)
	I0731 20:58:55.983655  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 20:58:55.983692  188133 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:55.983764  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:56.828512  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 20:58:56.828560  188133 cache_images.go:123] Successfully loaded all cached images
	I0731 20:58:56.828568  188133 cache_images.go:92] duration metric: took 14.276593942s to LoadCachedImages
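The "Loading image" / "Transferred and loaded ... from cache" pairs above cover each cached tarball being imported into the CRI-O image store with sudo podman load, with a per-image duration recorded. A minimal local sketch of that load step is below, assuming the tarballs already sit under /var/lib/minikube/images; it is not minikube's cache_images code, which runs these commands over SSH inside the guest.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"time"
)

// loadImage shells out to podman to import one cached image tarball,
// mirroring the "Loading image: ..." / "Transferred and loaded ..." pairs
// in the log. Paths are illustrative.
func loadImage(tarball string) error {
	start := time.Now()
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("loaded %s in %s\n", filepath.Base(tarball), time.Since(start).Round(time.Millisecond))
	return nil
}

func main() {
	images := []string{
		"/var/lib/minikube/images/coredns_v1.11.1",
		"/var/lib/minikube/images/etcd_3.5.14-0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0",
	}
	for _, img := range images {
		if err := loadImage(img); err != nil {
			fmt.Println(err)
		}
	}
}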
	I0731 20:58:56.828583  188133 kubeadm.go:934] updating node { 192.168.72.239 8443 v1.31.0-beta.0 crio true true} ...
	I0731 20:58:56.828722  188133 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-916885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:58:56.828806  188133 ssh_runner.go:195] Run: crio config
	I0731 20:58:56.877187  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:58:56.877222  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:58:56.877245  188133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:58:56.877269  188133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.239 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-916885 NodeName:no-preload-916885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:58:56.877442  188133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-916885"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:58:56.877526  188133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 20:58:56.887721  188133 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:58:56.887796  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:58:56.896845  188133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 20:58:56.912886  188133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 20:58:56.928914  188133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
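The kubeadm.yaml.new just copied to the guest is the rendered form of the kubeadm options struct logged a few lines earlier. A minimal text/template sketch of that kind of rendering follows; the field names and the template fragment are illustrative assumptions, not minikube's actual config generator.

package main

import (
	"os"
	"text/template"
)

// opts holds only the fields needed for the fragment below; the real
// options struct in the log carries far more.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	KubernetesVer    string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVer}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.72.239",
		BindPort:         8443,
		NodeName:         "no-preload-916885",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		KubernetesVer:    "v1.31.0-beta.0",
	})
}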
	I0731 20:58:56.945604  188133 ssh_runner.go:195] Run: grep 192.168.72.239	control-plane.minikube.internal$ /etc/hosts
	I0731 20:58:56.949538  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:56.961490  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:57.075114  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:58:57.091701  188133 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885 for IP: 192.168.72.239
	I0731 20:58:57.091724  188133 certs.go:194] generating shared ca certs ...
	I0731 20:58:57.091743  188133 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:58:57.091909  188133 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:58:57.091959  188133 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:58:57.091971  188133 certs.go:256] generating profile certs ...
	I0731 20:58:57.092062  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/client.key
	I0731 20:58:57.092141  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key.cc7e9c96
	I0731 20:58:57.092193  188133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key
	I0731 20:58:57.092330  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:58:57.092405  188133 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:58:57.092423  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:58:57.092458  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:58:57.092489  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:58:57.092520  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:58:57.092586  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:57.093296  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:58:57.139431  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:58:57.169132  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:58:57.196541  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:58:57.232826  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:58:57.260875  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:58:57.290195  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:58:57.316645  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:58:57.339741  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:58:57.362406  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:58:57.385009  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:58:57.407540  188133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:58:57.423697  188133 ssh_runner.go:195] Run: openssl version
	I0731 20:58:57.429741  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:58:57.440545  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.444984  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.445035  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.450651  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:58:57.460547  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:58:57.470575  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474939  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474988  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.480481  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:58:57.490404  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:58:57.500433  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504785  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504835  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.510165  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:58:57.520019  188133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:58:57.524596  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:58:57.530667  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:58:57.536315  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:58:57.542049  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:58:57.547594  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:58:57.553084  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
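The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least the next 24 hours before reusing it. A minimal native-Go equivalent of that check is sketched below; the file paths are the ones from the log, and the sketch is illustrative rather than minikube's certificate code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for
// at least the given duration, the same condition that
// `openssl x509 -checkend 86400` tests.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		ok, err := validFor(p, 24*time.Hour)
		fmt.Println(p, ok, err)
	}
}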
	I0731 20:58:57.558419  188133 kubeadm.go:392] StartCluster: {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:58:57.558501  188133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:58:57.558537  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.600004  188133 cri.go:89] found id: ""
	I0731 20:58:57.600087  188133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:58:57.609911  188133 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:58:57.609933  188133 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:58:57.609975  188133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:58:57.619498  188133 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:58:57.621885  188133 kubeconfig.go:125] found "no-preload-916885" server: "https://192.168.72.239:8443"
	I0731 20:58:57.624838  188133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:58:57.633984  188133 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.239
	I0731 20:58:57.634025  188133 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:58:57.634037  188133 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:58:57.634080  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.672988  188133 cri.go:89] found id: ""
	I0731 20:58:57.673053  188133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:58:57.689149  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:58:57.698520  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:58:57.698541  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 20:58:57.698595  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:58:57.707106  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:58:57.707163  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:58:57.715878  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:58:57.724169  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:58:57.724219  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:58:57.732890  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.741121  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:58:57.741174  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.749776  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:58:57.758063  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:58:57.758114  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:58:57.766815  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:58:57.775595  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:57.883689  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.740684  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.926231  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.987089  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
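On restart the cluster is rebuilt by re-running individual kubeadm init phases in sequence (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init, as the five commands above show. A minimal sketch of driving those phases with the pinned binaries on PATH is below; the command strings are taken from the log, while the runner itself is an illustrative assumption rather than minikube's restartPrimaryControlPlane code.

package main

import (
	"fmt"
	"os/exec"
)

// runPhase executes one `kubeadm init phase ...` step with the pinned
// Kubernetes binaries prepended to PATH, matching the `sudo env PATH=...`
// invocations in the log.
func runPhase(phase string) error {
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
		phase)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("phase %q: %v\n%s", phase, err, out)
	}
	return nil
}

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		if err := runPhase(p); err != nil {
			fmt.Println(err)
			return
		}
	}
}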
	I0731 20:58:59.049782  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:58:59.049862  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.418227  188656 start.go:364] duration metric: took 3m46.480116699s to acquireMachinesLock for "old-k8s-version-239115"
	I0731 20:59:00.418294  188656 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:00.418302  188656 fix.go:54] fixHost starting: 
	I0731 20:59:00.418738  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:00.418773  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:00.438533  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0731 20:59:00.438963  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:00.439499  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:59:00.439524  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:00.439930  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:00.441449  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:00.441651  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetState
	I0731 20:59:00.443465  188656 fix.go:112] recreateIfNeeded on old-k8s-version-239115: state=Stopped err=<nil>
	I0731 20:59:00.443505  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	W0731 20:59:00.443679  188656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:00.445840  188656 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239115" ...
	I0731 20:58:55.953940  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954422  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:55.954356  189429 retry.go:31] will retry after 3.068212527s: waiting for machine to come up
	I0731 20:58:59.025966  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026388  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has current primary IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026406  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Found IP for machine: 192.168.50.221
	I0731 20:58:59.026417  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserving static IP address...
	I0731 20:58:59.026867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserved static IP address: 192.168.50.221
	I0731 20:58:59.026918  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.026933  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for SSH to be available...
	I0731 20:58:59.026954  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | skip adding static IP to network mk-default-k8s-diff-port-125614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"}
	I0731 20:58:59.026972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Getting to WaitForSSH function...
	I0731 20:58:59.029330  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029685  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.029720  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029820  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH client type: external
	I0731 20:58:59.029846  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa (-rw-------)
	I0731 20:58:59.029877  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:59.029894  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | About to run SSH command:
	I0731 20:58:59.029906  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | exit 0
	I0731 20:58:59.161209  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:59.161713  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetConfigRaw
	I0731 20:58:59.162331  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.164645  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.164953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.164986  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.165269  188266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/config.json ...
	I0731 20:58:59.165479  188266 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:59.165503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:59.165692  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.167796  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.168110  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168247  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.168408  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168626  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168763  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.168901  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.169103  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.169115  188266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:59.281875  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:59.281901  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282185  188266 buildroot.go:166] provisioning hostname "default-k8s-diff-port-125614"
	I0731 20:58:59.282218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282460  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.284970  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285461  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.285498  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285612  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.285814  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286139  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.286278  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.286445  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.286460  188266 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125614 && echo "default-k8s-diff-port-125614" | sudo tee /etc/hostname
	I0731 20:58:59.411873  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125614
	
	I0731 20:58:59.411904  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.414733  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.415099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415274  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.415463  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415604  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415751  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.415898  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.416074  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.416090  188266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:59.539168  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:59.539210  188266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:59.539247  188266 buildroot.go:174] setting up certificates
	I0731 20:58:59.539256  188266 provision.go:84] configureAuth start
	I0731 20:58:59.539267  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.539595  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.542447  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.542887  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.542916  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.543103  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.545597  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.545972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.545992  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.546128  188266 provision.go:143] copyHostCerts
	I0731 20:58:59.546195  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:59.546206  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:59.546265  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:59.546366  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:59.546377  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:59.546407  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:59.546488  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:59.546492  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:59.546517  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:59.546565  188266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125614 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-125614 localhost minikube]
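The provision.go line above generates a machine server certificate whose SANs cover the loopback address, the node IP, the machine name, localhost, and minikube. A heavily simplified Go sketch of producing a server certificate with that kind of SAN list is below; it emits a self-signed certificate for brevity, whereas the real flow signs server.pem with the CA key, and the validity period and key size here are illustrative assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key and a server certificate carrying the same
	// kinds of SANs seen in the log (loopback, node IP, machine name).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-125614"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative validity period
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.221")},
		DNSNames:     []string{"default-k8s-diff-port-125614", "localhost", "minikube"},
	}
	// Self-signed for the sketch: template and parent are the same cert.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}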
	I0731 20:58:59.690753  188266 provision.go:177] copyRemoteCerts
	I0731 20:58:59.690811  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:59.690839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.693800  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694141  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.694175  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694383  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.694583  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.694748  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.694884  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:58:59.783710  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:59.814512  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 20:58:59.843492  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:59.867793  188266 provision.go:87] duration metric: took 328.521723ms to configureAuth
	I0731 20:58:59.867840  188266 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:59.868013  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:58:59.868089  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.871214  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871655  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.871684  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871875  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.872127  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872309  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.872687  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.872909  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.872935  188266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:00.165458  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:00.165492  188266 machine.go:97] duration metric: took 999.996831ms to provisionDockerMachine
	I0731 20:59:00.165509  188266 start.go:293] postStartSetup for "default-k8s-diff-port-125614" (driver="kvm2")
	I0731 20:59:00.165527  188266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:00.165549  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.165936  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:00.165973  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.168477  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168837  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.168864  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168991  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.169203  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.169387  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.169575  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.262132  188266 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:00.266596  188266 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:00.266621  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:00.266695  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:00.266789  188266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:00.266909  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:00.276407  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:00.300017  188266 start.go:296] duration metric: took 134.490488ms for postStartSetup
	I0731 20:59:00.300061  188266 fix.go:56] duration metric: took 19.289494966s for fixHost
	I0731 20:59:00.300087  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.302714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303073  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.303106  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303249  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.303448  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303786  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.303978  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:00.304204  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:59:00.304217  188266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:00.418073  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459540.389901096
	
	I0731 20:59:00.418096  188266 fix.go:216] guest clock: 1722459540.389901096
	I0731 20:59:00.418105  188266 fix.go:229] Guest: 2024-07-31 20:59:00.389901096 +0000 UTC Remote: 2024-07-31 20:59:00.30006642 +0000 UTC m=+284.542031804 (delta=89.834676ms)
	I0731 20:59:00.418130  188266 fix.go:200] guest clock delta is within tolerance: 89.834676ms
	I0731 20:59:00.418138  188266 start.go:83] releasing machines lock for "default-k8s-diff-port-125614", held for 19.407605953s
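The clock-sync step above runs `date +%s.%N` on the guest (the `%!s(MISSING)` noise appears to be the logger mangling the literal format string) and compares the result to the host clock, accepting the drift when it stays inside a small tolerance. A minimal Go sketch of that comparison, assuming a 2-second tolerance; the parsing helper is illustrative, not minikube's fix.go:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixSeconds turns "1722459540.389901096" (the guest's `date +%s.%N`
// output) into a time.Time.
func parseUnixSeconds(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixSeconds("1722459540.389901096") // sample value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}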
	I0731 20:59:00.418167  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.418669  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:00.421683  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422050  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.422090  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422234  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422999  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.423061  188266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:00.423119  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.423354  188266 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:00.423378  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.426188  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426362  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426603  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426631  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426790  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.426882  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426929  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.427019  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427197  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427208  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.427363  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.427380  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427668  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.511834  188266 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:00.536649  188266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:00.692463  188266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:00.700344  188266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:00.700413  188266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:00.721837  188266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:00.721863  188266 start.go:495] detecting cgroup driver to use...
	I0731 20:59:00.721940  188266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:00.742477  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:00.760049  188266 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:00.760120  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:00.777823  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:00.791680  188266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:00.908094  188266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:01.051284  188266 docker.go:233] disabling docker service ...
	I0731 20:59:01.051379  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:01.070927  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:01.083393  188266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:01.223186  188266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:01.355265  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:01.369810  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:01.390523  188266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:01.390588  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.401241  188266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:01.401308  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.412049  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.422145  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.432523  188266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:01.442640  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.456933  188266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.475628  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.486226  188266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:01.496757  188266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:01.496813  188266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:01.510264  188266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:01.520231  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:01.636451  188266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:01.784134  188266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:01.784220  188266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:01.788836  188266 start.go:563] Will wait 60s for crictl version
	I0731 20:59:01.788895  188266 ssh_runner.go:195] Run: which crictl
	I0731 20:59:01.793059  188266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:01.840110  188266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:01.840200  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.868816  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.908539  188266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
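The CRI-O preparation above is a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, adjust conmon_cgroup and default_sysctls, then restart the service. A rough Go equivalent of the first two sed substitutions, included only to make the intent of those commands explicit (the file path and patterns mirror the log; everything else is an assumption):

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf sets pause_image and cgroup_manager in a CRI-O drop-in,
// mirroring the two `sed -i` commands in the log above.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}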
	I0731 20:59:00.447208  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .Start
	I0731 20:59:00.447389  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring networks are active...
	I0731 20:59:00.448116  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network default is active
	I0731 20:59:00.448589  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network mk-old-k8s-version-239115 is active
	I0731 20:59:00.448892  188656 main.go:141] libmachine: (old-k8s-version-239115) Getting domain xml...
	I0731 20:59:00.450110  188656 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:59:01.823554  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting to get IP...
	I0731 20:59:01.824648  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:01.825111  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:01.825172  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:01.825080  189574 retry.go:31] will retry after 241.700507ms: waiting for machine to come up
	I0731 20:59:02.068913  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.069608  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.069738  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.069663  189574 retry.go:31] will retry after 258.921821ms: waiting for machine to come up
	I0731 20:59:02.330231  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.330846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.330877  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.330776  189574 retry.go:31] will retry after 460.911793ms: waiting for machine to come up
	I0731 20:59:02.793718  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.794177  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.794206  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.794156  189574 retry.go:31] will retry after 380.241989ms: waiting for machine to come up
	I0731 20:59:03.175918  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.176761  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.176786  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.176670  189574 retry.go:31] will retry after 631.876736ms: waiting for machine to come up
	I0731 20:59:03.810803  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.811478  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.811503  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.811366  189574 retry.go:31] will retry after 583.328017ms: waiting for machine to come up
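The repeated "will retry after …: waiting for machine to come up" lines are a randomized-backoff retry around a DHCP-lease lookup for the restarted VM. A small sketch of that retry pattern; lookupIP is a stand-in, not the libmachine API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a placeholder for the real DHCP-lease query against libvirt.
func lookupIP() (string, error) {
	return "", errNoLease
}

// waitForIP retries lookupIP with a randomized, growing delay, much like the
// retry.go lines above ("will retry after 241.700507ms", ...).
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out after %v: %w", timeout, errNoLease)
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}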
	I0731 20:58:59.550347  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.050077  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.066942  188133 api_server.go:72] duration metric: took 1.017157745s to wait for apiserver process to appear ...
	I0731 20:59:00.066991  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:00.067016  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:00.067685  188133 api_server.go:269] stopped: https://192.168.72.239:8443/healthz: Get "https://192.168.72.239:8443/healthz": dial tcp 192.168.72.239:8443: connect: connection refused
	I0731 20:59:00.567237  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.555694  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.555739  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.555756  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.606602  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.606641  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.606657  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.617900  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.617935  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:04.067724  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.073838  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.073875  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:04.568116  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.575013  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.575044  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:05.067154  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:05.073314  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 20:59:05.083559  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 20:59:05.083595  188133 api_server.go:131] duration metric: took 5.016595337s to wait for apiserver health ...
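The healthz sequence above polls https://192.168.72.239:8443/healthz roughly every half second, treating 403 (anonymous access before the RBAC bootstrap hook finishes) and 500 (post-start hooks still failing) as "not ready yet" until a 200 arrives. A minimal sketch of such a loop; skipping TLS verification is an assumption for the self-signed apiserver certificate, and this is not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires, printing intermediate 403/500 bodies like the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// assumption: the apiserver serves a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %v\n", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.239:8443/healthz", 5*time.Minute); err != nil {
		panic(err)
	}
}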
	I0731 20:59:05.083606  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:59:05.083614  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:05.085564  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:01.910091  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:01.913322  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.913714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:01.913747  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.914046  188266 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:01.918504  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:01.930599  188266 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:01.930756  188266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:01.930826  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:01.968796  188266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:01.968882  188266 ssh_runner.go:195] Run: which lz4
	I0731 20:59:01.974123  188266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:01.979542  188266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:01.979575  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:03.529579  188266 crio.go:462] duration metric: took 1.555502358s to copy over tarball
	I0731 20:59:03.529662  188266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:04.395886  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:04.396400  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:04.396664  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:04.396347  189574 retry.go:31] will retry after 1.154504022s: waiting for machine to come up
	I0731 20:59:05.552240  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:05.552879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:05.552901  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:05.552831  189574 retry.go:31] will retry after 1.037365333s: waiting for machine to come up
	I0731 20:59:06.591875  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:06.592416  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:06.592450  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:06.592329  189574 retry.go:31] will retry after 1.249444079s: waiting for machine to come up
	I0731 20:59:07.843058  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:07.843436  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:07.843463  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:07.843370  189574 retry.go:31] will retry after 1.700521776s: waiting for machine to come up
	I0731 20:59:05.087080  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:05.105303  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:05.125019  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:05.136768  188133 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:05.136823  188133 system_pods.go:61] "coredns-5cfdc65f69-c9gcf" [3b9458d3-81d0-4138-8a6a-81f087c3ed02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:05.136836  188133 system_pods.go:61] "etcd-no-preload-916885" [aa31006d-8e74-48c2-9b5d-5604b3a1c283] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:05.136847  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [64549ba0-8e30-41d3-82eb-cdb729623a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:05.136856  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [2620c741-c27a-4df5-8555-58767d43c675] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:05.136866  188133 system_pods.go:61] "kube-proxy-99jgm" [0060c1a0-badc-401c-a4dc-5cfaa958654e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:05.136880  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [f02a0a1d-5cbb-4ee3-a084-21710667565e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:05.136894  188133 system_pods.go:61] "metrics-server-78fcd8795b-jrzgg" [acbe48be-32e9-44f8-9bf2-52e0e92a09e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:05.136912  188133 system_pods.go:61] "storage-provisioner" [d0f902cd-d1db-4c70-bdad-34bda917cec1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:05.136926  188133 system_pods.go:74] duration metric: took 11.882384ms to wait for pod list to return data ...
	I0731 20:59:05.136937  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:05.142117  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:05.142149  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:05.142165  188133 node_conditions.go:105] duration metric: took 5.221098ms to run NodePressure ...
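The NodePressure check reads each node's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs) before declaring the node usable. A self-contained sketch using the Kubernetes API types to read those fields; the node object is constructed in memory with the values from the log rather than fetched from a cluster:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Stand-in node status carrying the capacity values reported in the log.
	node := corev1.Node{
		Status: corev1.NodeStatus{
			Capacity: corev1.ResourceList{
				corev1.ResourceCPU:              resource.MustParse("2"),
				corev1.ResourceEphemeralStorage: resource.MustParse("17734596Ki"),
			},
		},
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
}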
	I0731 20:59:05.142187  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:05.534597  188133 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539583  188133 kubeadm.go:739] kubelet initialised
	I0731 20:59:05.539604  188133 kubeadm.go:740] duration metric: took 4.980297ms waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539626  188133 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:05.544498  188133 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:07.778624  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
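The pod_ready.go lines poll the coredns pod and report its Ready condition until it flips to True or the 4m0s budget runs out. A minimal client-go sketch of that wait; the kubeconfig path and the 2-second poll interval are assumptions, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	const ns, name = "kube-system", "coredns-5cfdc65f69-c9gcf" // pod from the log
	for {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		select {
		case <-ctx.Done():
			panic(fmt.Errorf("timed out waiting for %s: %w", name, ctx.Err()))
		case <-time.After(2 * time.Second):
		}
	}
}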
	I0731 20:59:06.024682  188266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.494984583s)
	I0731 20:59:06.024718  188266 crio.go:469] duration metric: took 2.495107603s to extract the tarball
	I0731 20:59:06.024729  188266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:06.062675  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:06.107619  188266 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:06.107649  188266 cache_images.go:84] Images are preloaded, skipping loading
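The preload path above copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it under /var with tar's lz4 filter before re-listing images with crictl. A sketch of the extraction step via os/exec; it assumes tar and lz4 are installed on the local machine, whereas the real code runs the command through an SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the preloaded image tarball into /var, preserving
// xattrs, matching the `sudo tar --xattrs ... -I lz4 -C /var -xf` call above.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v\n%s", err, out)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
}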
	I0731 20:59:06.107667  188266 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0731 20:59:06.107792  188266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-125614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:06.107863  188266 ssh_runner.go:195] Run: crio config
	I0731 20:59:06.173983  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:06.174007  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:06.174019  188266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:06.174040  188266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125614 NodeName:default-k8s-diff-port-125614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:06.174168  188266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125614"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:06.174233  188266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:06.185059  188266 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:06.185189  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:06.196571  188266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 20:59:06.218964  188266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:06.239033  188266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 20:59:06.260519  188266 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:06.264718  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
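The one-liner above keeps /etc/hosts idempotent: it strips any existing control-plane.minikube.internal line and re-appends the current IP. The same idea in Go, writing the file directly rather than through a temp copy (paths and values mirror the log):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<host>" and appends a fresh
// "ip<TAB>host" entry, mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.221", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}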
	I0731 20:59:06.278173  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:06.423941  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:06.441663  188266 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614 for IP: 192.168.50.221
	I0731 20:59:06.441689  188266 certs.go:194] generating shared ca certs ...
	I0731 20:59:06.441711  188266 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:06.441906  188266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:06.441965  188266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:06.441978  188266 certs.go:256] generating profile certs ...
	I0731 20:59:06.442080  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/client.key
	I0731 20:59:06.442157  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key.9cb12361
	I0731 20:59:06.442205  188266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key
	I0731 20:59:06.442354  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:06.442391  188266 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:06.442404  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:06.442447  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:06.442478  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:06.442522  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:06.442580  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:06.443470  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:06.497056  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:06.530978  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:06.574533  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:06.619523  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 20:59:06.648269  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:06.677824  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:06.704450  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:06.731606  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:06.756990  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:06.781214  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:06.804855  188266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:06.821531  188266 ssh_runner.go:195] Run: openssl version
	I0731 20:59:06.827394  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:06.838680  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843618  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843681  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.850238  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:06.865533  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:06.881516  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886809  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886876  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.893345  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:06.908919  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:06.922150  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927165  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927226  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.933724  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:06.946420  188266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:06.951347  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:06.959595  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:06.967808  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:06.977083  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:06.985089  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:06.992190  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
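Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours. A crypto/x509 equivalent; the path is one of the certs from the log and the 24h window mirrors the 86400-second argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h, would regenerate")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}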
	I0731 20:59:06.998458  188266 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:06.998548  188266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:06.998592  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.053176  188266 cri.go:89] found id: ""
	I0731 20:59:07.053256  188266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:07.064373  188266 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:07.064392  188266 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:07.064433  188266 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:07.075167  188266 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:07.076057  188266 kubeconfig.go:125] found "default-k8s-diff-port-125614" server: "https://192.168.50.221:8444"
	I0731 20:59:07.078091  188266 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:07.089136  188266 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0731 20:59:07.089161  188266 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:07.089174  188266 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:07.089225  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.133015  188266 cri.go:89] found id: ""
	I0731 20:59:07.133099  188266 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:07.155229  188266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:07.166326  188266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:07.166348  188266 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:07.166418  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 20:59:07.176709  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:07.176768  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:07.187232  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 20:59:07.197376  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:07.197453  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:07.209451  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.221141  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:07.221205  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.232016  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 20:59:07.242340  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:07.242402  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:07.253794  188266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
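The four grep/rm pairs above implement a stale-config check: each node-local kubeconfig must reference the expected control-plane URL (port 8444 in this profile), and any file that does not is deleted so the subsequent `kubeadm init phase kubeconfig all` regenerates it. A minimal Go sketch of the same pattern, assuming only the endpoint and file paths shown in the log:

```go
// Hypothetical sketch of the stale-kubeconfig check above: each
// /etc/kubernetes/*.conf is checked for the expected control-plane URL
// and removed when the URL is absent (or the file is missing).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444" // URL taken from the log
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: drop it so
			// `kubeadm init phase kubeconfig all` rewrites it.
			fmt.Printf("removing stale %s\n", c)
			_ = os.Remove(c)
		}
	}
}
```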
	I0731 20:59:07.264912  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:07.382193  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.445321  188266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.063086935s)
	I0731 20:59:08.445364  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.664603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.744053  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.857284  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:08.857380  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.357505  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.857488  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.887329  188266 api_server.go:72] duration metric: took 1.030046485s to wait for apiserver process to appear ...
	I0731 20:59:09.887358  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:09.887405  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.887966  188266 api_server.go:269] stopped: https://192.168.50.221:8444/healthz: Get "https://192.168.50.221:8444/healthz": dial tcp 192.168.50.221:8444: connect: connection refused
	I0731 20:59:10.387674  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.545937  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:09.546581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:09.546605  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:09.546529  189574 retry.go:31] will retry after 1.934269586s: waiting for machine to come up
	I0731 20:59:11.482402  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:11.482794  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:11.482823  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:11.482744  189574 retry.go:31] will retry after 2.575131422s: waiting for machine to come up
	I0731 20:59:10.053236  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:10.551437  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:10.551467  188133 pod_ready.go:81] duration metric: took 5.006944467s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:10.551480  188133 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:12.559346  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:12.827297  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.827342  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.827390  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.883496  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.883538  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.887715  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.902715  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:12.902746  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.388340  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.392840  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.392872  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.888510  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.894519  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.894553  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:14.388177  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:14.392557  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 20:59:14.399285  188266 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:14.399321  188266 api_server.go:131] duration metric: took 4.511955505s to wait for apiserver health ...
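The healthz probes above progress from connection refused, to 403 (RBAC bootstrap roles not yet created), to 500 with failing post-start hooks, and finally to 200 once the apiserver is fully up. A minimal sketch of that polling loop, using the endpoint from the log and skipping certificate verification purely to keep the example short:

```go
// Sketch of the /healthz polling seen above: keep GETting the endpoint
// until it returns 200 "ok" or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.221:8444/healthz" // endpoint taken from the log
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403/500 responses like the ones above mean the server answers
			// but its post-start hooks have not finished yet.
			fmt.Println("healthz not ready, status", status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
```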
	I0731 20:59:14.399333  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:14.399340  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:14.400987  188266 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:14.401981  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:14.420648  188266 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:14.441909  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:14.451365  188266 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:14.451406  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:14.451419  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:14.451426  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:14.451432  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:14.451438  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:14.451444  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:14.451461  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:14.451468  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:14.451476  188266 system_pods.go:74] duration metric: took 9.546534ms to wait for pod list to return data ...
	I0731 20:59:14.451486  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:14.454760  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:14.454784  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:14.454795  188266 node_conditions.go:105] duration metric: took 3.303087ms to run NodePressure ...
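The node_conditions lines above read the node's allocatable resources (ephemeral storage, CPU) and verify no pressure conditions before moving on. A hedged client-go sketch of the same read, assuming the kubeconfig path that appears further down in this log:

```go
// Illustrative only: list nodes, print allocatable cpu/ephemeral-storage,
// and flag any NodePressure condition that is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-121704/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		storage := n.Status.Allocatable[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, cond := range n.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if cond.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", cond.Type)
				}
			}
		}
	}
}
```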
	I0731 20:59:14.454820  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:14.730635  188266 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735144  188266 kubeadm.go:739] kubelet initialised
	I0731 20:59:14.735165  188266 kubeadm.go:740] duration metric: took 4.500388ms waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735173  188266 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:14.742292  188266 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.749460  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749486  188266 pod_ready.go:81] duration metric: took 7.166399ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.749496  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749504  188266 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.757068  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757091  188266 pod_ready.go:81] duration metric: took 7.579526ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.757101  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757109  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.762181  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762203  188266 pod_ready.go:81] duration metric: took 5.083756ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.762213  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762219  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.845070  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845095  188266 pod_ready.go:81] duration metric: took 82.86894ms for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.845107  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845113  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.246100  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246131  188266 pod_ready.go:81] duration metric: took 401.011321ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.246150  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246159  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.645657  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645689  188266 pod_ready.go:81] duration metric: took 399.519543ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.645704  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645713  188266 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.045744  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045776  188266 pod_ready.go:81] duration metric: took 400.053102ms for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:16.045791  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045800  188266 pod_ready.go:38] duration metric: took 1.310615323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
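Each pod_ready line above boils down to reading the pod's PodReady condition, and the wait is skipped while the hosting node is itself still NotReady. A small client-go sketch of that per-pod check, shown for one of the label selectors listed above and using the kubeconfig path from this log:

```go
// Sketch of the Ready check behind the pod_ready lines: a pod counts as
// "Ready" when its PodReady condition has status True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-121704/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One of the label selectors the wait loop above iterates over.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
```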
	I0731 20:59:16.045838  188266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:59:16.059046  188266 ops.go:34] apiserver oom_adj: -16
	I0731 20:59:16.059071  188266 kubeadm.go:597] duration metric: took 8.994671774s to restartPrimaryControlPlane
	I0731 20:59:16.059082  188266 kubeadm.go:394] duration metric: took 9.060633072s to StartCluster
	I0731 20:59:16.059104  188266 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.059181  188266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:16.060895  188266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.061143  188266 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:59:16.061226  188266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:59:16.061324  188266 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061386  188266 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061399  188266 addons.go:243] addon storage-provisioner should already be in state true
	I0731 20:59:16.061388  188266 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061400  188266 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061453  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:16.061495  188266 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061516  188266 addons.go:243] addon metrics-server should already be in state true
	I0731 20:59:16.061438  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061603  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061436  188266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125614"
	I0731 20:59:16.062072  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062084  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062085  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062110  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062127  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062188  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062822  188266 out.go:177] * Verifying Kubernetes components...
	I0731 20:59:16.064337  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:16.081194  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I0731 20:59:16.081208  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0731 20:59:16.081197  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0731 20:59:16.081872  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.081956  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082026  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082423  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082439  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.082926  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082951  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083047  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.083058  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.083076  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083712  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.083754  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.084871  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085484  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085734  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.085815  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.085845  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.089827  188266 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.089854  188266 addons.go:243] addon default-storageclass should already be in state true
	I0731 20:59:16.089884  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.090245  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.090301  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.106592  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0731 20:59:16.106609  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 20:59:16.108751  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.108849  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.109414  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109442  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109546  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109576  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109948  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.109953  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.110132  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.110163  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.111216  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0731 20:59:16.111657  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.112217  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.112239  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.112319  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.113374  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.115608  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.115649  188266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:16.115940  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.115979  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.116965  188266 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:16.117053  188266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.117069  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:59:16.117083  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.118247  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 20:59:16.118268  188266 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 20:59:16.118288  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.120985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121540  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.121563  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121764  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.121865  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.122295  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.122371  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.122490  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122552  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.122632  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.122850  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.123024  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.123218  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.133929  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0731 20:59:16.134348  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.134844  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.134865  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.135175  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.135389  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.136985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.137272  188266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.137287  188266 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:59:16.137313  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.140222  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140543  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.140560  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.140762  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140795  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.140969  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.141107  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.257677  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:16.275791  188266 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:16.373528  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 20:59:16.373552  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 20:59:16.380797  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.404028  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.406072  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 20:59:16.406098  188266 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 20:59:16.456003  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:16.456030  188266 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 20:59:16.517304  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:17.377438  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377468  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377514  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377565  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377765  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377780  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377797  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377827  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377835  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377930  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378354  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378417  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378424  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.378569  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378583  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.384110  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.384130  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.384325  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.384341  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428457  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428480  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428766  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.428782  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428804  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.429011  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.429024  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.429040  188266 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-125614"
	I0731 20:59:17.431884  188266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 20:59:14.059385  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:14.059857  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:14.059879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:14.059819  189574 retry.go:31] will retry after 3.127857327s: waiting for machine to come up
	I0731 20:59:17.189405  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:17.189871  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:17.189902  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:17.189821  189574 retry.go:31] will retry after 4.516767425s: waiting for machine to come up
	I0731 20:59:14.559493  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:16.561540  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:16.561568  188133 pod_ready.go:81] duration metric: took 6.010079286s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.561580  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068734  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.068756  188133 pod_ready.go:81] duration metric: took 1.507167128s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068766  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073069  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.073086  188133 pod_ready.go:81] duration metric: took 4.313817ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073095  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077480  188133 pod_ready.go:92] pod "kube-proxy-99jgm" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.077497  188133 pod_ready.go:81] duration metric: took 4.395483ms for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077506  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082197  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.082221  188133 pod_ready.go:81] duration metric: took 4.709042ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082234  188133 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:17.433072  188266 addons.go:510] duration metric: took 1.371850333s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
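The addon step above copies the manifests to /etc/kubernetes/addons/ on the node and applies them with the bundled kubectl against the node-local kubeconfig, exactly as in the `Run:` line higher up. A sketch of that invocation via os/exec; it only makes sense when executed on the node itself (or over SSH), where those paths exist:

```go
// Mirrors the kubectl apply command from the log; illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```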
	I0731 20:59:18.280135  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:20.280881  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.082812  187862 start.go:364] duration metric: took 58.27194035s to acquireMachinesLock for "embed-certs-831240"
	I0731 20:59:23.082866  187862 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:23.082875  187862 fix.go:54] fixHost starting: 
	I0731 20:59:23.083267  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:23.083308  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:23.101291  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0731 20:59:23.101826  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:23.102464  187862 main.go:141] libmachine: Using API Version  1
	I0731 20:59:23.102498  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:23.102817  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:23.103024  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:23.103187  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 20:59:23.105117  187862 fix.go:112] recreateIfNeeded on embed-certs-831240: state=Stopped err=<nil>
	I0731 20:59:23.105143  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	W0731 20:59:23.105307  187862 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:23.106919  187862 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831240" ...
	I0731 20:59:21.708296  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708811  188656 main.go:141] libmachine: (old-k8s-version-239115) Found IP for machine: 192.168.61.51
	I0731 20:59:21.708846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has current primary IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708860  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserving static IP address...
	I0731 20:59:21.709432  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.709663  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserved static IP address: 192.168.61.51
	I0731 20:59:21.709695  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | skip adding static IP to network mk-old-k8s-version-239115 - found existing host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"}
	I0731 20:59:21.709711  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting for SSH to be available...
	I0731 20:59:21.709723  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:59:21.711911  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712310  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.712345  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712517  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:59:21.712540  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:59:21.712581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:21.712598  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:59:21.712625  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:59:21.838026  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:21.838370  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:59:21.839169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:21.842168  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842588  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.842623  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842866  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:59:21.843126  188656 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:21.843150  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:21.843388  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.846148  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846657  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.846686  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.847165  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847360  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847530  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.847707  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.847938  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.847951  188656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:21.955109  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:21.955143  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955460  188656 buildroot.go:166] provisioning hostname "old-k8s-version-239115"
	I0731 20:59:21.955492  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955728  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.958752  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959146  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.959176  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959395  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.959620  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959781  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959918  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.960078  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.960358  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.960378  188656 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239115 && echo "old-k8s-version-239115" | sudo tee /etc/hostname
	I0731 20:59:22.090625  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239115
	
	I0731 20:59:22.090665  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.093927  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094356  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.094387  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094729  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.094942  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095153  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095364  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.095583  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.095819  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.095845  188656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:22.217153  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:22.217189  188656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:22.217215  188656 buildroot.go:174] setting up certificates
	I0731 20:59:22.217229  188656 provision.go:84] configureAuth start
	I0731 20:59:22.217242  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:22.217613  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:22.220640  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221082  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.221125  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221237  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.223811  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224152  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.224180  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224337  188656 provision.go:143] copyHostCerts
	I0731 20:59:22.224405  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:22.224418  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:22.224485  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:22.224604  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:22.224616  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:22.224654  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:22.224729  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:22.224740  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:22.224766  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:22.224833  188656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239115 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-239115]
	I0731 20:59:22.407532  188656 provision.go:177] copyRemoteCerts
	I0731 20:59:22.407599  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:22.407625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.410594  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411007  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.411033  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411338  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.411582  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.411811  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.412007  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.492781  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:22.518278  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 20:59:22.543018  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:22.568888  188656 provision.go:87] duration metric: took 351.643ms to configureAuth
	I0731 20:59:22.568920  188656 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:22.569099  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:59:22.569169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.572154  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572471  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.572500  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572669  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.572872  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.572993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.573112  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.573249  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.573481  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.573512  188656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:22.847156  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:22.847193  188656 machine.go:97] duration metric: took 1.004049055s to provisionDockerMachine
	I0731 20:59:22.847211  188656 start.go:293] postStartSetup for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:59:22.847229  188656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:22.847284  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:22.847710  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:22.847741  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.850515  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.850935  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.850962  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.851088  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.851288  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.851524  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.851674  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.932316  188656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:22.936672  188656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:22.936707  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:22.936792  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:22.936894  188656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:22.937011  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:22.946454  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:22.972952  188656 start.go:296] duration metric: took 125.72216ms for postStartSetup
	I0731 20:59:22.972996  188656 fix.go:56] duration metric: took 22.554695114s for fixHost
	I0731 20:59:22.973026  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.975758  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976166  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.976198  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976320  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.976585  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976782  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976966  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.977115  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.977275  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.977284  188656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:23.082657  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459563.026856067
	
	I0731 20:59:23.082683  188656 fix.go:216] guest clock: 1722459563.026856067
	I0731 20:59:23.082694  188656 fix.go:229] Guest: 2024-07-31 20:59:23.026856067 +0000 UTC Remote: 2024-07-31 20:59:22.973000729 +0000 UTC m=+249.171273714 (delta=53.855338ms)
	I0731 20:59:23.082721  188656 fix.go:200] guest clock delta is within tolerance: 53.855338ms
	I0731 20:59:23.082727  188656 start.go:83] releasing machines lock for "old-k8s-version-239115", held for 22.664459101s
	I0731 20:59:23.082752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.083052  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:23.086626  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087093  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.087135  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087366  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.087954  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088159  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088251  188656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:23.088303  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.088370  188656 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:23.088392  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.091710  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.091989  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092073  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092101  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092227  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092429  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.092472  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092520  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092618  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.092752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092803  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.092931  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.093100  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.093255  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.175012  188656 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:23.200192  188656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:23.348227  188656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:23.355109  188656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:23.355195  188656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:23.371683  188656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:23.371707  188656 start.go:495] detecting cgroup driver to use...
	I0731 20:59:23.371786  188656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:23.388727  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:23.408830  188656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:23.408907  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:23.423594  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:23.437876  188656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:23.559105  188656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:23.743186  188656 docker.go:233] disabling docker service ...
	I0731 20:59:23.743253  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:23.758053  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:23.779951  188656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:20.089173  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:22.092138  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.919494  188656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:24.057230  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:24.072687  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:24.094528  188656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:59:24.094600  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.106579  188656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:24.106634  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.120079  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.130759  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.142925  188656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:24.154760  188656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:24.165059  188656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:24.165113  188656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:24.179567  188656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:24.191838  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:24.339078  188656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:24.515723  188656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:24.515810  188656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:24.521882  188656 start.go:563] Will wait 60s for crictl version
	I0731 20:59:24.521966  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:24.527655  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:24.581055  188656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:24.581151  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.623207  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.662956  188656 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:59:22.780311  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.281324  188266 node_ready.go:49] node "default-k8s-diff-port-125614" has status "Ready":"True"
	I0731 20:59:23.281373  188266 node_ready.go:38] duration metric: took 7.005540469s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:23.281387  188266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:23.291207  188266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299173  188266 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.299202  188266 pod_ready.go:81] duration metric: took 7.971632ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299215  188266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307561  188266 pod_ready.go:92] pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.307580  188266 pod_ready.go:81] duration metric: took 8.357239ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307589  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314466  188266 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.314544  188266 pod_ready.go:81] duration metric: took 6.946044ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314565  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.323341  188266 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.108292  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Start
	I0731 20:59:23.108473  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring networks are active...
	I0731 20:59:23.109160  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network default is active
	I0731 20:59:23.109575  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network mk-embed-certs-831240 is active
	I0731 20:59:23.110032  187862 main.go:141] libmachine: (embed-certs-831240) Getting domain xml...
	I0731 20:59:23.110762  187862 main.go:141] libmachine: (embed-certs-831240) Creating domain...
	I0731 20:59:24.457926  187862 main.go:141] libmachine: (embed-certs-831240) Waiting to get IP...
	I0731 20:59:24.458936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.459381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.459477  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.459375  189758 retry.go:31] will retry after 266.695372ms: waiting for machine to come up
	I0731 20:59:24.727938  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.728394  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.728532  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.728451  189758 retry.go:31] will retry after 349.84093ms: waiting for machine to come up
	I0731 20:59:25.080044  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.080634  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.080668  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.080592  189758 retry.go:31] will retry after 324.555122ms: waiting for machine to come up
	I0731 20:59:25.407332  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.407852  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.407877  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.407795  189758 retry.go:31] will retry after 580.815897ms: waiting for machine to come up
	I0731 20:59:25.990957  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.991551  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.991578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.991468  189758 retry.go:31] will retry after 570.045476ms: waiting for machine to come up
	I0731 20:59:26.563493  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:26.563901  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:26.563931  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:26.563853  189758 retry.go:31] will retry after 582.597352ms: waiting for machine to come up
	I0731 20:59:27.148256  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:27.148744  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:27.148773  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:27.148688  189758 retry.go:31] will retry after 1.105713474s: waiting for machine to come up
	I0731 20:59:24.664851  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:24.668464  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.668842  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:24.668869  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.669103  188656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:24.674448  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:24.690857  188656 kubeadm.go:883] updating cluster {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:24.691011  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:59:24.691056  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:24.744259  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:24.744348  188656 ssh_runner.go:195] Run: which lz4
	I0731 20:59:24.749358  188656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:24.754299  188656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:24.754341  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:59:26.551495  188656 crio.go:462] duration metric: took 1.802206904s to copy over tarball
	I0731 20:59:26.551571  188656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:24.589677  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:26.591079  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:29.089923  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:25.824008  188266 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.824037  188266 pod_ready.go:81] duration metric: took 2.509461823s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.824052  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840569  188266 pod_ready.go:92] pod "kube-proxy-csdc4" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.840595  188266 pod_ready.go:81] duration metric: took 16.533543ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840613  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103726  188266 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:26.103759  188266 pod_ready.go:81] duration metric: took 263.1364ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103774  188266 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:28.112583  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:30.610462  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:28.255818  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:28.256478  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:28.256506  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:28.256408  189758 retry.go:31] will retry after 1.3552249s: waiting for machine to come up
	I0731 20:59:29.613070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:29.613661  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:29.613693  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:29.613620  189758 retry.go:31] will retry after 1.522319436s: waiting for machine to come up
	I0731 20:59:31.138020  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:31.138490  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:31.138522  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:31.138434  189758 retry.go:31] will retry after 1.573723862s: waiting for machine to come up
	I0731 20:59:29.653941  188656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102337952s)
	I0731 20:59:29.653974  188656 crio.go:469] duration metric: took 3.102444338s to extract the tarball
	I0731 20:59:29.653982  188656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:29.704065  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:29.745966  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:29.746010  188656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:59:29.746076  188656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.746107  188656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.746129  188656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.746149  188656 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.746170  188656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:59:29.746410  188656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.746423  188656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.746735  188656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.747998  188656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.748005  188656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.748021  188656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.748091  188656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.915865  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.918049  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.950840  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.952762  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.956317  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.959905  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:59:30.000707  188656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:59:30.000768  188656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.000821  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.007207  188656 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:59:30.007251  188656 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.007294  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.016613  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.082306  188656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:59:30.082358  188656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.082364  188656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:59:30.082414  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.082418  188656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.082557  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.089299  188656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:59:30.089382  188656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.089427  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.105150  188656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:59:30.105217  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.105246  188656 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:59:30.105264  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.105282  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.129702  188656 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:59:30.129748  188656 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.129779  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.129826  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.129853  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.129800  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.188192  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:59:30.188243  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:59:30.188342  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:59:30.188365  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.268231  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:59:30.268296  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:59:30.268337  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:59:30.287822  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:59:30.287929  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:59:30.635440  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:30.776879  188656 cache_images.go:92] duration metric: took 1.030849977s to LoadCachedImages
	W0731 20:59:30.777006  188656 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0731 20:59:30.777028  188656 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0731 20:59:30.777175  188656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:30.777284  188656 ssh_runner.go:195] Run: crio config
	I0731 20:59:30.832542  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:59:30.832570  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:30.832586  188656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:30.832618  188656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239115 NodeName:old-k8s-version-239115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:59:30.832798  188656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239115"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:30.832877  188656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:59:30.842909  188656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:30.842995  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:30.852951  188656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 20:59:30.872643  188656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:30.889851  188656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 20:59:30.910958  188656 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:30.915645  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:30.928698  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:31.055628  188656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:31.076731  188656 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115 for IP: 192.168.61.51
	I0731 20:59:31.076759  188656 certs.go:194] generating shared ca certs ...
	I0731 20:59:31.076789  188656 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.076979  188656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:31.077041  188656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:31.077057  188656 certs.go:256] generating profile certs ...
	I0731 20:59:31.077175  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key
	I0731 20:59:31.077378  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83
	I0731 20:59:31.077514  188656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key
	I0731 20:59:31.077704  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:31.077789  188656 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:31.077806  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:31.077854  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:31.077892  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:31.077932  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:31.077997  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:31.078906  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:31.126980  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:31.167327  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:31.211947  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:31.258307  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 20:59:31.296628  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:31.342330  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:31.391114  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:31.415097  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:31.442595  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:31.472160  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:31.497814  188656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:31.515890  188656 ssh_runner.go:195] Run: openssl version
	I0731 20:59:31.523423  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:31.537984  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544161  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544225  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.552590  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:31.567190  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:31.581206  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586903  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586966  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.593485  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:31.606764  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:31.619748  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624599  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624681  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.631293  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:31.642823  188656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:31.647273  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:31.653142  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:31.659046  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:31.665552  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:31.671454  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:31.677426  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:59:31.683490  188656 kubeadm.go:392] StartCluster: {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:31.683586  188656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:31.683625  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.725466  188656 cri.go:89] found id: ""
	I0731 20:59:31.725548  188656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:31.737025  188656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:31.737050  188656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:31.737113  188656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:31.747325  188656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:31.748325  188656 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:31.748965  188656 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239115" cluster setting kubeconfig missing "old-k8s-version-239115" context setting]
	I0731 20:59:31.749997  188656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.757569  188656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:31.771188  188656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0731 20:59:31.771222  188656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:31.771236  188656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:31.771292  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.811574  188656 cri.go:89] found id: ""
	I0731 20:59:31.811653  188656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:31.829930  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:31.840145  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:31.840165  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:31.840206  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:31.851266  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:31.851340  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:31.861634  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:31.871532  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:31.871605  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:31.882164  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.892222  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:31.892291  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.903299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:31.916163  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:31.916235  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:31.929423  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:31.942668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.107220  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.953249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.207806  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.307640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.410338  188656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:33.410444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:31.221009  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:33.589275  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.612024  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:35.109601  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.713632  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:32.714137  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:32.714169  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:32.714064  189758 retry.go:31] will retry after 2.013485748s: waiting for machine to come up
	I0731 20:59:34.729625  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:34.730006  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:34.730070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:34.729970  189758 retry.go:31] will retry after 2.193072749s: waiting for machine to come up
	I0731 20:59:36.924345  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:36.924990  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:36.925008  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:36.924940  189758 retry.go:31] will retry after 3.394781674s: waiting for machine to come up
	I0731 20:59:33.910958  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.411011  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.911110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.410715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.911117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.410825  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.911311  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.410757  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.910786  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:38.410821  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.089622  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:38.589435  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:37.110446  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:39.111323  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:40.322463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:40.322827  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:40.322857  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:40.322774  189758 retry.go:31] will retry after 3.836613891s: waiting for machine to come up
	I0731 20:59:38.910891  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.411547  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.911260  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.411404  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.910719  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.411449  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.910643  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.410967  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.910703  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:43.411187  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.088768  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:43.589256  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:41.609891  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.111379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.160516  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161009  187862 main.go:141] libmachine: (embed-certs-831240) Found IP for machine: 192.168.39.92
	I0731 20:59:44.161029  187862 main.go:141] libmachine: (embed-certs-831240) Reserving static IP address...
	I0731 20:59:44.161041  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has current primary IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161561  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.161594  187862 main.go:141] libmachine: (embed-certs-831240) DBG | skip adding static IP to network mk-embed-certs-831240 - found existing host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"}
	I0731 20:59:44.161609  187862 main.go:141] libmachine: (embed-certs-831240) Reserved static IP address: 192.168.39.92
	I0731 20:59:44.161623  187862 main.go:141] libmachine: (embed-certs-831240) Waiting for SSH to be available...
	I0731 20:59:44.161638  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Getting to WaitForSSH function...
	I0731 20:59:44.163936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164285  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.164318  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164447  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH client type: external
	I0731 20:59:44.164479  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa (-rw-------)
	I0731 20:59:44.164499  187862 main.go:141] libmachine: (embed-certs-831240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:44.164510  187862 main.go:141] libmachine: (embed-certs-831240) DBG | About to run SSH command:
	I0731 20:59:44.164544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | exit 0
	I0731 20:59:44.293463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:44.293819  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetConfigRaw
	I0731 20:59:44.294490  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.296982  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297351  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.297381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297634  187862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/config.json ...
	I0731 20:59:44.297877  187862 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:44.297897  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:44.298116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.300452  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300806  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.300829  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300953  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.301146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301308  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301439  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.301634  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.301811  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.301823  187862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:44.418065  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:44.418105  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418428  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:59:44.418446  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418666  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.421984  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422403  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.422434  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422568  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.422733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.422893  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.423023  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.423208  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.423371  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.423410  187862 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831240 && echo "embed-certs-831240" | sudo tee /etc/hostname
	I0731 20:59:44.549670  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831240
	
	I0731 20:59:44.549697  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.552503  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.552851  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.552876  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.553017  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.553200  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553398  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553533  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.553721  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.554012  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.554039  187862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831240/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:44.674662  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:44.674693  187862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:44.674713  187862 buildroot.go:174] setting up certificates
	I0731 20:59:44.674723  187862 provision.go:84] configureAuth start
	I0731 20:59:44.674733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.675011  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.677631  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.677911  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.677951  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.678081  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.679869  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680177  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.680205  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680332  187862 provision.go:143] copyHostCerts
	I0731 20:59:44.680391  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:44.680401  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:44.680450  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:44.680537  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:44.680545  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:44.680564  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:44.680628  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:44.680635  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:44.680652  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:44.680711  187862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831240 san=[127.0.0.1 192.168.39.92 embed-certs-831240 localhost minikube]
	I0731 20:59:44.733872  187862 provision.go:177] copyRemoteCerts
	I0731 20:59:44.733927  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:44.733951  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.736399  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736731  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.736758  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736935  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.737131  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.737273  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.737430  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:44.824050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:44.847699  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 20:59:44.872138  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:44.896013  187862 provision.go:87] duration metric: took 221.275458ms to configureAuth
	I0731 20:59:44.896042  187862 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:44.896234  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:44.896327  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.898820  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899206  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.899232  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899457  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.899660  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899822  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899993  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.900216  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.900438  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.900462  187862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:45.179165  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:45.179194  187862 machine.go:97] duration metric: took 881.302407ms to provisionDockerMachine
	I0731 20:59:45.179213  187862 start.go:293] postStartSetup for "embed-certs-831240" (driver="kvm2")
	I0731 20:59:45.179226  187862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:45.179252  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.179615  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:45.179646  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.182617  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183047  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.183069  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183284  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.183510  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.183654  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.183805  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.273492  187862 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:45.277593  187862 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:45.277618  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:45.277687  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:45.277782  187862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:45.277889  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:45.288172  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:45.311763  187862 start.go:296] duration metric: took 132.534326ms for postStartSetup
	I0731 20:59:45.311803  187862 fix.go:56] duration metric: took 22.228928797s for fixHost
	I0731 20:59:45.311827  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.314578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.314962  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.314998  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.315146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.315381  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315549  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315681  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.315868  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:45.316035  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:45.316045  187862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:59:45.426289  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459585.381297707
	
	I0731 20:59:45.426314  187862 fix.go:216] guest clock: 1722459585.381297707
	I0731 20:59:45.426324  187862 fix.go:229] Guest: 2024-07-31 20:59:45.381297707 +0000 UTC Remote: 2024-07-31 20:59:45.311808006 +0000 UTC m=+363.090091892 (delta=69.489701ms)
	I0731 20:59:45.426379  187862 fix.go:200] guest clock delta is within tolerance: 69.489701ms
	I0731 20:59:45.426387  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 22.343543995s
	I0731 20:59:45.426419  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.426684  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:45.429330  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429757  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.429785  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429952  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430453  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430671  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430790  187862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:45.430854  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.430905  187862 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:45.430943  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.433850  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434108  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434192  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434222  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434385  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434580  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434584  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434611  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434760  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.434768  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434939  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434929  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.435099  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.435243  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.542122  187862 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:45.548583  187862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:45.690235  187862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:45.696897  187862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:45.696986  187862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:45.714456  187862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:45.714480  187862 start.go:495] detecting cgroup driver to use...
	I0731 20:59:45.714546  187862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:45.732184  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:45.747047  187862 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:45.747104  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:45.761152  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:45.775267  187862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:45.890891  187862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:46.043503  187862 docker.go:233] disabling docker service ...
	I0731 20:59:46.043577  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:46.058174  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:46.070900  187862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:46.209527  187862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:46.343868  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:46.357583  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:46.375819  187862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:46.375875  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.386762  187862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:46.386844  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.397495  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.407654  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.418326  187862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:46.428983  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.439530  187862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.457956  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.468003  187862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:46.477332  187862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:46.477400  187862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:46.490886  187862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:46.500516  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:46.617952  187862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:46.761978  187862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:46.762088  187862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:46.767210  187862 start.go:563] Will wait 60s for crictl version
	I0731 20:59:46.767275  187862 ssh_runner.go:195] Run: which crictl
	I0731 20:59:46.771502  187862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:46.810894  187862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:46.810976  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.839234  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.871209  187862 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:46.872648  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:46.875374  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875683  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:46.875698  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875900  187862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:46.880402  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:46.894098  187862 kubeadm.go:883] updating cluster {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:46.894238  187862 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:46.894300  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:46.937003  187862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:46.937079  187862 ssh_runner.go:195] Run: which lz4
	I0731 20:59:46.941158  187862 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:59:46.945395  187862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:46.945425  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:43.910997  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.410783  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.911365  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.410690  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.911150  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.411384  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.910579  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.411171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.910578  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:48.411377  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.589690  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:47.591464  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:46.608955  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.611634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:50.615557  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.414703  187862 crio.go:462] duration metric: took 1.473569222s to copy over tarball
	I0731 20:59:48.414789  187862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:50.666750  187862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251926888s)
	I0731 20:59:50.666783  187862 crio.go:469] duration metric: took 2.252043688s to extract the tarball
	I0731 20:59:50.666793  187862 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:50.707188  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:50.749781  187862 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:50.749808  187862 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:59:50.749817  187862 kubeadm.go:934] updating node { 192.168.39.92 8443 v1.30.3 crio true true} ...
	I0731 20:59:50.749923  187862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-831240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:50.749998  187862 ssh_runner.go:195] Run: crio config
	I0731 20:59:50.797191  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:50.797214  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:50.797227  187862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:50.797253  187862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831240 NodeName:embed-certs-831240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:50.797484  187862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:50.797556  187862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:50.808170  187862 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:50.808236  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:50.817847  187862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 20:59:50.834107  187862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:50.849722  187862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 20:59:50.866599  187862 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:50.870727  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:50.884490  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:51.043488  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:51.064792  187862 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240 for IP: 192.168.39.92
	I0731 20:59:51.064816  187862 certs.go:194] generating shared ca certs ...
	I0731 20:59:51.064836  187862 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:51.065142  187862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:51.065225  187862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:51.065254  187862 certs.go:256] generating profile certs ...
	I0731 20:59:51.065443  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/client.key
	I0731 20:59:51.065571  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key.4e545c52
	I0731 20:59:51.065639  187862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key
	I0731 20:59:51.065798  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:51.065846  187862 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:51.065857  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:51.065883  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:51.065909  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:51.065929  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:51.065971  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:51.066633  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:51.107287  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:51.138745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:51.176139  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:51.211344  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 20:59:51.241050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:59:51.269307  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:51.293184  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:59:51.316745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:51.343620  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:51.367293  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:51.391789  187862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:51.413821  187862 ssh_runner.go:195] Run: openssl version
	I0731 20:59:51.420455  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:51.431721  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436672  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436724  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.442604  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:51.453601  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:51.464109  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468598  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468648  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.474333  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:51.484758  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:51.495093  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499557  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499605  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.505244  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
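	The certificate steps above follow the OpenSSL c_rehash convention: each CA file copied under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and then symlinked into /etc/ssl/certs under the name <subject-hash>.0 (for example b5213941.0 for minikubeCA.pem), so TLS clients can locate it by subject hash. A rough Go sketch of that convention, offered as an illustration rather than minikube's implementation (the helper name installCert is made up; writing to /etc/ssl/certs would require root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCert symlinks certPath into /etc/ssl/certs under its OpenSSL
	// subject-hash name (<hash>.0), mirroring the ln -fs commands in the log.
	func installCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		// Recreate the symlink so repeated runs stay idempotent.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}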
	I0731 20:59:51.515545  187862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:51.519923  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:51.525696  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:51.531430  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:51.537082  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:51.542713  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:51.548206  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:59:51.553705  187862 kubeadm.go:392] StartCluster: {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:51.553793  187862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:51.553841  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.592396  187862 cri.go:89] found id: ""
	I0731 20:59:51.592472  187862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:51.602510  187862 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:51.602528  187862 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:51.602578  187862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:51.612384  187862 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:51.613530  187862 kubeconfig.go:125] found "embed-certs-831240" server: "https://192.168.39.92:8443"
	I0731 20:59:51.615991  187862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:51.625205  187862 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I0731 20:59:51.625239  187862 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:51.625253  187862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:51.625307  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.663278  187862 cri.go:89] found id: ""
	I0731 20:59:51.663370  187862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:51.678876  187862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:51.688071  187862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:51.688092  187862 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:51.688139  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:51.696441  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:51.696494  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:51.705310  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:51.713545  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:51.713599  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:51.723512  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.732304  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:51.732380  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.741301  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:51.749537  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:51.749583  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:51.758609  187862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:51.774450  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:51.888916  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:48.910784  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.411137  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.911453  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.411128  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.911431  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.410483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.910975  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.411519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.911079  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.410802  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.094603  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.589951  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:53.424691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:55.609675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.666705  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.899759  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.975806  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:53.050422  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:53.050493  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.551073  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.051427  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.551268  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.570361  187862 api_server.go:72] duration metric: took 1.519937245s to wait for apiserver process to appear ...
	I0731 20:59:54.570389  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:54.570414  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:53.911405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.410870  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.911330  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.411491  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.911380  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.411483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.910602  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.411228  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.910486  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:58.411198  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.260421  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.260455  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.260469  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.284265  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.284301  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.570976  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.575616  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:57.575644  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.071247  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.075871  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.075903  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.570906  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.581990  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.582038  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:59.070528  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:59.074787  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 20:59:59.081502  187862 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:59.081541  187862 api_server.go:131] duration metric: took 4.511132973s to wait for apiserver health ...
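	The api_server.go wait above polls https://192.168.39.92:8443/healthz until it returns 200, treating the early 403 (anonymous user before RBAC bootstraps) and 500 (poststarthook/rbac/bootstrap-roles still failing) responses as "not ready yet". A minimal, self-contained sketch of such a poll loop in Go; the function name, timeout, and retry interval are assumptions for illustration, not minikube's code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	// Any non-200 status or transport error is retried, matching the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver presents a self-signed cert during bring-up, so skip verification here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.92:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}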
	I0731 20:59:59.081552  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:59.081561  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:59.083504  187862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:55.089279  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:57.589380  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:59.084894  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:59.098139  187862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:59.118458  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:59.128022  187862 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:59.128061  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:59.128071  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:59.128082  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:59.128100  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:59.128113  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:59.128121  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:59.128134  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:59.128145  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:59.128156  187862 system_pods.go:74] duration metric: took 9.673815ms to wait for pod list to return data ...
	I0731 20:59:59.128168  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:59.131825  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:59.131853  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:59.131865  187862 node_conditions.go:105] duration metric: took 3.691724ms to run NodePressure ...
	I0731 20:59:59.131897  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:59.494923  187862 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501848  187862 kubeadm.go:739] kubelet initialised
	I0731 20:59:59.501875  187862 kubeadm.go:740] duration metric: took 6.920816ms waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501885  187862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:59.510503  187862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.518204  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518234  187862 pod_ready.go:81] duration metric: took 7.702873ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.518247  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518263  187862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.523236  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523258  187862 pod_ready.go:81] duration metric: took 4.985299ms for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.523266  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.535237  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535256  187862 pod_ready.go:81] duration metric: took 11.97449ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.535270  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.541512  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541531  187862 pod_ready.go:81] duration metric: took 6.24797ms for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.541539  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541545  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.922722  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922757  187862 pod_ready.go:81] duration metric: took 381.203526ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.922771  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922779  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.322049  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322077  187862 pod_ready.go:81] duration metric: took 399.289505ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.322088  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322094  187862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.722961  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.722993  187862 pod_ready.go:81] duration metric: took 400.88956ms for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.723008  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.723017  187862 pod_ready.go:38] duration metric: took 1.221112347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:00:00.723050  187862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:00:00.735642  187862 ops.go:34] apiserver oom_adj: -16
	I0731 21:00:00.735697  187862 kubeadm.go:597] duration metric: took 9.133136671s to restartPrimaryControlPlane
	I0731 21:00:00.735735  187862 kubeadm.go:394] duration metric: took 9.182030801s to StartCluster
	I0731 21:00:00.735764  187862 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.735860  187862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:00:00.737955  187862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.738247  187862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:00:00.738329  187862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:00:00.738418  187862 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831240"
	I0731 21:00:00.738432  187862 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831240"
	I0731 21:00:00.738463  187862 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-831240"
	W0731 21:00:00.738475  187862 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:00:00.738481  187862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831240"
	I0731 21:00:00.738513  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738547  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:00:00.738581  187862 addons.go:69] Setting metrics-server=true in profile "embed-certs-831240"
	I0731 21:00:00.738651  187862 addons.go:234] Setting addon metrics-server=true in "embed-certs-831240"
	W0731 21:00:00.738666  187862 addons.go:243] addon metrics-server should already be in state true
	I0731 21:00:00.738735  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738818  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738858  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.738897  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738960  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.739144  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.739190  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.740244  187862 out.go:177] * Verifying Kubernetes components...
	I0731 21:00:00.746003  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:00.755735  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0731 21:00:00.755773  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0731 21:00:00.756268  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756271  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756594  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0731 21:00:00.756820  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756847  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.756892  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756917  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757069  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.757228  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757254  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757458  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.757638  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.757668  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757745  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.757774  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.758005  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.758543  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.758586  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.761553  187862 addons.go:234] Setting addon default-storageclass=true in "embed-certs-831240"
	W0731 21:00:00.761587  187862 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:00:00.761618  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.762018  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.762070  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.775492  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0731 21:00:00.776091  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.776712  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.776743  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.776760  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I0731 21:00:00.777245  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.777402  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.777513  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.777920  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.777945  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.778185  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0731 21:00:00.778393  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.778603  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.778687  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.779223  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.779243  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.779665  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.779718  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.780231  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.780274  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.780612  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.781947  187862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:00:00.782994  187862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:58.110503  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.112109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.784194  187862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:00.784216  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:00:00.784240  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.784937  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:00:00.784958  187862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:00:00.784984  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.788544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.788947  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.788970  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789127  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.789389  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.789521  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.789548  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789571  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.789773  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.790126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.790324  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.790502  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.790663  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.799024  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0731 21:00:00.799718  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.800341  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.800360  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.800967  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.801258  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.803078  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.803555  187862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:00.803571  187862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:00:00.803591  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.809363  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.809461  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809492  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.809512  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809680  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.809858  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.810032  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.933963  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:00:00.953572  187862 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:01.036486  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:01.040636  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:00:01.040658  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:00:01.063384  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:01.068645  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:00:01.068675  187862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:00:01.090838  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:01.090861  187862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:00:01.113173  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:02.099966  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063427097s)
	I0731 21:00:02.100021  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100035  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100080  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036657274s)
	I0731 21:00:02.100129  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100338  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100441  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100452  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100461  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100580  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100605  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100615  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100623  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100698  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100709  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100723  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100866  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100875  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100882  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.107654  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.107688  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.107952  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.107968  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.108003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140031  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026799248s)
	I0731 21:00:02.140100  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140424  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140455  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140470  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140482  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140494  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140772  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140800  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140808  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140817  187862 addons.go:475] Verifying addon metrics-server=true in "embed-certs-831240"
	I0731 21:00:02.142583  187862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:00:02.143787  187862 addons.go:510] duration metric: took 1.405477731s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 20:59:58.910774  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.410697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.911233  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.411170  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.911416  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.410979  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.911444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.411537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.911216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:03.411386  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.588315  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.610109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:04.610324  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.958162  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:05.458997  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:03.910942  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.411505  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.911485  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.410763  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.910937  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.411216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.910743  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.410941  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.910922  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:08.410593  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.589597  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.089475  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.090023  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:06.610390  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.110758  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.958154  187862 node_ready.go:49] node "embed-certs-831240" has status "Ready":"True"
	I0731 21:00:07.958180  187862 node_ready.go:38] duration metric: took 7.004576791s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:07.958191  187862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:00:07.969639  187862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974704  187862 pod_ready.go:92] pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:07.974733  187862 pod_ready.go:81] duration metric: took 5.064645ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974745  187862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:09.980566  187862 pod_ready.go:102] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:10.480476  187862 pod_ready.go:92] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.480501  187862 pod_ready.go:81] duration metric: took 2.505748029s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.480511  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485850  187862 pod_ready.go:92] pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.485873  187862 pod_ready.go:81] duration metric: took 5.353478ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485883  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:08.910788  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.410807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.911286  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.411372  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.910748  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.411253  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.411208  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.910887  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:13.411318  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.589454  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.090483  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:11.610842  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.110306  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:12.492346  187862 pod_ready.go:102] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.991859  187862 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.991884  187862 pod_ready.go:81] duration metric: took 3.505993775s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.991893  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997932  187862 pod_ready.go:92] pod "kube-proxy-x662j" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.997961  187862 pod_ready.go:81] duration metric: took 6.060225ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997974  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007155  187862 pod_ready.go:92] pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:14.007178  187862 pod_ready.go:81] duration metric: took 9.197289ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007187  187862 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:16.013417  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.910943  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.410728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.911343  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.410545  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.910560  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.411117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.910537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.410761  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.910796  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:18.411138  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.589010  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.589215  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:16.609886  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.610209  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.611613  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.013504  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.513116  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.911394  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.411098  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.910629  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.410698  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.910760  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.410503  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.910582  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.410724  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.910792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:23.410961  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.089938  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.588082  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.109996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:25.110361  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:22.514254  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:24.514729  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.013263  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.910510  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.410725  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.411543  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.911473  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.410494  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.910519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.410950  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.911528  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:28.411350  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.589873  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.590134  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.612311  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:30.110116  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:29.014386  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:31.014534  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:28.911371  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.411269  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.911465  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.410633  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.911166  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.411184  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.910806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.410806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.911125  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:33.410942  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:33.411021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:33.461204  188656 cri.go:89] found id: ""
	I0731 21:00:33.461232  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.461241  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:33.461249  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:33.461313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:33.500898  188656 cri.go:89] found id: ""
	I0731 21:00:33.500927  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.500937  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:33.500944  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:33.501010  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:33.536865  188656 cri.go:89] found id: ""
	I0731 21:00:33.536889  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.536902  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:33.536908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:33.536957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:33.578540  188656 cri.go:89] found id: ""
	I0731 21:00:33.578570  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.578582  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:33.578595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:33.578686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:33.616242  188656 cri.go:89] found id: ""
	I0731 21:00:33.616266  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.616276  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:33.616283  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:33.616345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:33.650436  188656 cri.go:89] found id: ""
	I0731 21:00:33.650468  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.650479  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:33.650487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:33.650552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:33.687256  188656 cri.go:89] found id: ""
	I0731 21:00:33.687288  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.687300  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:33.687308  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:33.687365  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:33.720381  188656 cri.go:89] found id: ""
	I0731 21:00:33.720428  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.720440  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:33.720453  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:33.720469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:33.772182  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:33.772226  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:33.787323  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:33.787359  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:00:30.089778  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.587877  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.110769  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:34.610418  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:33.514142  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.013676  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:00:33.907858  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:33.907878  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:33.907892  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:33.974118  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:33.974157  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:36.513427  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:36.527531  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:36.527588  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:36.567679  188656 cri.go:89] found id: ""
	I0731 21:00:36.567706  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.567714  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:36.567726  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:36.567786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:36.608106  188656 cri.go:89] found id: ""
	I0731 21:00:36.608134  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.608145  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:36.608153  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:36.608215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:36.651783  188656 cri.go:89] found id: ""
	I0731 21:00:36.651815  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.651824  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:36.651830  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:36.651892  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:36.686716  188656 cri.go:89] found id: ""
	I0731 21:00:36.686743  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.686751  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:36.686758  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:36.686823  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:36.721823  188656 cri.go:89] found id: ""
	I0731 21:00:36.721857  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.721865  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:36.721871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:36.721939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:36.758060  188656 cri.go:89] found id: ""
	I0731 21:00:36.758093  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.758103  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:36.758112  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:36.758173  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:36.801667  188656 cri.go:89] found id: ""
	I0731 21:00:36.801694  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.801704  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:36.801712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:36.801776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:36.845084  188656 cri.go:89] found id: ""
	I0731 21:00:36.845113  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.845124  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:36.845137  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:36.845152  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:36.897208  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:36.897248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:36.910716  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:36.910750  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:36.987259  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:36.987285  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:36.987304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:37.061109  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:37.061144  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:34.589416  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.592841  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.088346  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.611386  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.111149  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:38.516701  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.017409  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.600847  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:39.615897  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:39.615957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:39.655390  188656 cri.go:89] found id: ""
	I0731 21:00:39.655417  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.655424  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:39.655430  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:39.655502  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:39.694180  188656 cri.go:89] found id: ""
	I0731 21:00:39.694213  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.694224  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:39.694231  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:39.694300  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:39.736752  188656 cri.go:89] found id: ""
	I0731 21:00:39.736783  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.736793  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:39.736801  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:39.736860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:39.775685  188656 cri.go:89] found id: ""
	I0731 21:00:39.775770  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.775790  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:39.775802  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:39.775871  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:39.816790  188656 cri.go:89] found id: ""
	I0731 21:00:39.816820  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.816829  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:39.816835  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:39.816886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:39.854931  188656 cri.go:89] found id: ""
	I0731 21:00:39.854963  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.854973  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:39.854981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:39.855045  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:39.891039  188656 cri.go:89] found id: ""
	I0731 21:00:39.891066  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.891074  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:39.891083  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:39.891136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:39.927434  188656 cri.go:89] found id: ""
	I0731 21:00:39.927463  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.927473  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:39.927483  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:39.927496  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:39.941240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:39.941272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:40.017212  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:40.017233  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:40.017246  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:40.094047  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:40.094081  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:40.138940  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:40.138966  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:42.690818  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:42.704855  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:42.704931  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:42.752315  188656 cri.go:89] found id: ""
	I0731 21:00:42.752347  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.752368  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:42.752376  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:42.752445  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:42.790060  188656 cri.go:89] found id: ""
	I0731 21:00:42.790090  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.790101  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:42.790109  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:42.790220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:42.825504  188656 cri.go:89] found id: ""
	I0731 21:00:42.825532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.825540  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:42.825547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:42.825598  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:42.860157  188656 cri.go:89] found id: ""
	I0731 21:00:42.860193  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.860204  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:42.860213  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:42.860286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:42.902914  188656 cri.go:89] found id: ""
	I0731 21:00:42.902947  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.902959  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:42.902967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:42.903036  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:42.950503  188656 cri.go:89] found id: ""
	I0731 21:00:42.950532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.950541  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:42.950550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:42.950603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:43.010232  188656 cri.go:89] found id: ""
	I0731 21:00:43.010261  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.010272  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:43.010280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:43.010344  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:43.045487  188656 cri.go:89] found id: ""
	I0731 21:00:43.045517  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.045527  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:43.045539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:43.045556  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:43.123248  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:43.123279  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:43.123296  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:43.212230  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:43.212272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:43.254595  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:43.254626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:43.306187  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:43.306227  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:41.589806  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.088126  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.611786  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.109436  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:43.513500  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.514161  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.820246  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:45.835707  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:45.835786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:45.872079  188656 cri.go:89] found id: ""
	I0731 21:00:45.872110  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.872122  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:45.872130  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:45.872196  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:45.910637  188656 cri.go:89] found id: ""
	I0731 21:00:45.910664  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.910672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:45.910678  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:45.910740  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:45.945316  188656 cri.go:89] found id: ""
	I0731 21:00:45.945360  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.945372  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:45.945380  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:45.945455  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:45.982015  188656 cri.go:89] found id: ""
	I0731 21:00:45.982046  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.982057  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:45.982096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:45.982165  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:46.017359  188656 cri.go:89] found id: ""
	I0731 21:00:46.017392  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.017404  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:46.017412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:46.017478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:46.054401  188656 cri.go:89] found id: ""
	I0731 21:00:46.054431  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.054447  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:46.054454  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:46.054507  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:46.092107  188656 cri.go:89] found id: ""
	I0731 21:00:46.092130  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.092137  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:46.092143  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:46.092190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:46.128613  188656 cri.go:89] found id: ""
	I0731 21:00:46.128642  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.128652  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:46.128665  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:46.128679  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:46.144539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:46.144570  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:46.219399  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:46.219433  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:46.219448  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:46.304486  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:46.304529  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:46.344087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:46.344121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:46.090543  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.090607  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:46.111072  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.610316  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.611553  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.014287  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.513252  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.894728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:48.916610  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:48.916675  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:48.978515  188656 cri.go:89] found id: ""
	I0731 21:00:48.978543  188656 logs.go:276] 0 containers: []
	W0731 21:00:48.978550  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:48.978557  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:48.978615  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:49.026224  188656 cri.go:89] found id: ""
	I0731 21:00:49.026257  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.026268  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:49.026276  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:49.026354  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:49.064967  188656 cri.go:89] found id: ""
	I0731 21:00:49.064994  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.065003  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:49.065010  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:49.065070  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:49.101966  188656 cri.go:89] found id: ""
	I0731 21:00:49.101990  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.101999  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:49.102004  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:49.102056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:49.137775  188656 cri.go:89] found id: ""
	I0731 21:00:49.137801  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.137809  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:49.137815  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:49.137867  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:49.173778  188656 cri.go:89] found id: ""
	I0731 21:00:49.173824  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.173832  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:49.173839  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:49.173908  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:49.207211  188656 cri.go:89] found id: ""
	I0731 21:00:49.207239  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.207247  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:49.207254  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:49.207333  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:49.244126  188656 cri.go:89] found id: ""
	I0731 21:00:49.244159  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.244180  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:49.244202  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:49.244221  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:49.299606  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:49.299646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:49.314093  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:49.314121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:49.384691  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:49.384712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:49.384728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:49.464425  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:49.464462  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.005670  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:52.019617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:52.019705  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:52.053452  188656 cri.go:89] found id: ""
	I0731 21:00:52.053485  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.053494  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:52.053500  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:52.053552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:52.094462  188656 cri.go:89] found id: ""
	I0731 21:00:52.094495  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.094504  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:52.094510  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:52.094572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:52.134555  188656 cri.go:89] found id: ""
	I0731 21:00:52.134584  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.134595  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:52.134602  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:52.134676  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:52.168805  188656 cri.go:89] found id: ""
	I0731 21:00:52.168851  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.168863  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:52.168871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:52.168939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:52.203093  188656 cri.go:89] found id: ""
	I0731 21:00:52.203121  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.203132  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:52.203140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:52.203213  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:52.237816  188656 cri.go:89] found id: ""
	I0731 21:00:52.237842  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.237850  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:52.237857  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:52.237906  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:52.272136  188656 cri.go:89] found id: ""
	I0731 21:00:52.272175  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.272194  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:52.272202  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:52.272261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:52.306616  188656 cri.go:89] found id: ""
	I0731 21:00:52.306641  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.306649  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:52.306659  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:52.306671  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:52.372668  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:52.372690  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:52.372707  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:52.457752  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:52.457794  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.496087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:52.496129  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:52.548137  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:52.548176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:50.588204  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.089737  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.110034  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.110293  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:52.514848  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.013623  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.015221  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.063463  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:55.076922  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:55.077005  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:55.117479  188656 cri.go:89] found id: ""
	I0731 21:00:55.117511  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.117523  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:55.117531  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:55.117595  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:55.156311  188656 cri.go:89] found id: ""
	I0731 21:00:55.156339  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.156348  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:55.156354  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:55.156421  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:55.196778  188656 cri.go:89] found id: ""
	I0731 21:00:55.196807  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.196818  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:55.196826  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:55.196898  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:55.237575  188656 cri.go:89] found id: ""
	I0731 21:00:55.237605  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.237614  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:55.237620  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:55.237672  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:55.271717  188656 cri.go:89] found id: ""
	I0731 21:00:55.271746  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.271754  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:55.271760  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:55.271811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:55.307586  188656 cri.go:89] found id: ""
	I0731 21:00:55.307618  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.307630  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:55.307637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:55.307708  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:55.343325  188656 cri.go:89] found id: ""
	I0731 21:00:55.343352  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.343361  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:55.343367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:55.343418  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:55.378959  188656 cri.go:89] found id: ""
	I0731 21:00:55.378988  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.378997  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:55.379008  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:55.379021  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:55.454213  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:55.454243  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:55.454260  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:55.532802  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:55.532839  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.575903  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:55.575940  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:55.635105  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:55.635140  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.149801  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:58.162682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:58.162743  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:58.196220  188656 cri.go:89] found id: ""
	I0731 21:00:58.196245  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.196254  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:58.196260  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:58.196313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:58.231052  188656 cri.go:89] found id: ""
	I0731 21:00:58.231083  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.231093  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:58.231099  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:58.231156  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:58.265569  188656 cri.go:89] found id: ""
	I0731 21:00:58.265599  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.265612  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:58.265633  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:58.265695  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:58.300750  188656 cri.go:89] found id: ""
	I0731 21:00:58.300779  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.300788  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:58.300793  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:58.300869  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:58.333920  188656 cri.go:89] found id: ""
	I0731 21:00:58.333949  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.333958  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:58.333963  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:58.334015  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:58.368732  188656 cri.go:89] found id: ""
	I0731 21:00:58.368759  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.368771  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:58.368787  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:58.368855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:58.408454  188656 cri.go:89] found id: ""
	I0731 21:00:58.408488  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.408501  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:58.408510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:58.408575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:58.445855  188656 cri.go:89] found id: ""
	I0731 21:00:58.445888  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.445900  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:58.445913  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:58.445934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:58.496144  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:58.496177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.510708  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:58.510743  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:58.580690  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:58.580712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:58.580725  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:58.657281  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:58.657320  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.591068  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:58.088264  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.610282  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.611376  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.017831  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.514115  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.196374  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:01.209044  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:01.209111  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:01.247313  188656 cri.go:89] found id: ""
	I0731 21:01:01.247343  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.247353  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:01.247360  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:01.247443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:01.282269  188656 cri.go:89] found id: ""
	I0731 21:01:01.282300  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.282308  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:01.282314  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:01.282370  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:01.315598  188656 cri.go:89] found id: ""
	I0731 21:01:01.315628  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.315638  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:01.315644  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:01.315697  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:01.352492  188656 cri.go:89] found id: ""
	I0731 21:01:01.352521  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.352533  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:01.352540  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:01.352605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:01.387858  188656 cri.go:89] found id: ""
	I0731 21:01:01.387885  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.387894  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:01.387900  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:01.387950  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:01.425014  188656 cri.go:89] found id: ""
	I0731 21:01:01.425042  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.425052  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:01.425061  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:01.425129  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:01.463068  188656 cri.go:89] found id: ""
	I0731 21:01:01.463098  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.463107  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:01.463113  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:01.463171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:01.500174  188656 cri.go:89] found id: ""
	I0731 21:01:01.500203  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.500214  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:01.500229  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:01.500244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:01.554350  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:01.554389  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:01.569353  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:01.569394  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:01.641074  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:01.641095  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:01.641108  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:01.722340  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:01.722377  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:00.088915  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.089981  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.109888  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.109951  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.015302  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.513535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.264035  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:04.278374  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:04.278441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:04.314037  188656 cri.go:89] found id: ""
	I0731 21:01:04.314068  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.314079  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:04.314087  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:04.314159  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:04.347604  188656 cri.go:89] found id: ""
	I0731 21:01:04.347635  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.347646  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:04.347653  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:04.347718  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:04.382412  188656 cri.go:89] found id: ""
	I0731 21:01:04.382442  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.382454  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:04.382462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:04.382516  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:04.419097  188656 cri.go:89] found id: ""
	I0731 21:01:04.419130  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.419142  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:04.419150  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:04.419209  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:04.464561  188656 cri.go:89] found id: ""
	I0731 21:01:04.464592  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.464601  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:04.464607  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:04.464683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:04.500484  188656 cri.go:89] found id: ""
	I0731 21:01:04.500510  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.500518  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:04.500524  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:04.500577  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:04.536211  188656 cri.go:89] found id: ""
	I0731 21:01:04.536239  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.536250  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:04.536257  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:04.536324  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:04.569521  188656 cri.go:89] found id: ""
	I0731 21:01:04.569548  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.569556  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:04.569567  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:04.569583  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:04.621228  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:04.621261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:04.637500  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:04.637527  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:04.710577  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:04.710606  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:04.710623  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.788305  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:04.788343  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.329209  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:07.343021  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:07.343089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:07.378556  188656 cri.go:89] found id: ""
	I0731 21:01:07.378588  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.378603  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:07.378610  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:07.378679  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:07.416419  188656 cri.go:89] found id: ""
	I0731 21:01:07.416455  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.416467  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:07.416474  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:07.416538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:07.454720  188656 cri.go:89] found id: ""
	I0731 21:01:07.454749  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.454758  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:07.454764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:07.454815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:07.488963  188656 cri.go:89] found id: ""
	I0731 21:01:07.488995  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.489004  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:07.489009  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:07.489060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:07.531916  188656 cri.go:89] found id: ""
	I0731 21:01:07.531949  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.531961  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:07.531967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:07.532019  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:07.569233  188656 cri.go:89] found id: ""
	I0731 21:01:07.569266  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.569275  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:07.569281  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:07.569350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:07.606318  188656 cri.go:89] found id: ""
	I0731 21:01:07.606349  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.606360  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:07.606368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:07.606442  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:07.641408  188656 cri.go:89] found id: ""
	I0731 21:01:07.641436  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.641445  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:07.641454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:07.641466  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.681094  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:07.681123  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:07.734600  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:07.734641  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:07.748747  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:07.748779  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:07.821775  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:07.821799  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:07.821816  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.590174  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:07.089655  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.110694  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:08.610381  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.611128  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:09.013688  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:11.513361  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.399973  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:10.412908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:10.412986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:10.448866  188656 cri.go:89] found id: ""
	I0731 21:01:10.448895  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.448903  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:10.448909  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:10.448966  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:10.486309  188656 cri.go:89] found id: ""
	I0731 21:01:10.486338  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.486346  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:10.486352  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:10.486411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:10.522834  188656 cri.go:89] found id: ""
	I0731 21:01:10.522856  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.522863  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:10.522870  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:10.522929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:10.558272  188656 cri.go:89] found id: ""
	I0731 21:01:10.558304  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.558324  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:10.558330  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:10.558391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:10.596560  188656 cri.go:89] found id: ""
	I0731 21:01:10.596589  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.596600  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:10.596608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:10.596668  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:10.633488  188656 cri.go:89] found id: ""
	I0731 21:01:10.633518  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.633529  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:10.633537  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:10.633597  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:10.665779  188656 cri.go:89] found id: ""
	I0731 21:01:10.665812  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.665824  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:10.665832  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:10.665895  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:10.700526  188656 cri.go:89] found id: ""
	I0731 21:01:10.700556  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.700564  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:10.700575  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:10.700587  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:10.753507  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:10.753550  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:10.768056  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:10.768089  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:10.842120  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:10.842142  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:10.842159  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:10.916532  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:10.916565  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:13.456826  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:13.471064  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:13.471130  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:13.505660  188656 cri.go:89] found id: ""
	I0731 21:01:13.505694  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.505707  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:13.505713  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:13.505775  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:13.543084  188656 cri.go:89] found id: ""
	I0731 21:01:13.543109  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.543117  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:13.543123  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:13.543182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:13.578940  188656 cri.go:89] found id: ""
	I0731 21:01:13.578966  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.578974  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:13.578981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:13.579047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:13.617710  188656 cri.go:89] found id: ""
	I0731 21:01:13.617733  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.617740  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:13.617747  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:13.617810  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:13.653535  188656 cri.go:89] found id: ""
	I0731 21:01:13.653567  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.653579  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:13.653587  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:13.653658  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:13.687914  188656 cri.go:89] found id: ""
	I0731 21:01:13.687942  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.687953  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:13.687960  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:13.688031  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:13.725242  188656 cri.go:89] found id: ""
	I0731 21:01:13.725278  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.725287  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:13.725293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:13.725372  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:13.760890  188656 cri.go:89] found id: ""
	I0731 21:01:13.760918  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.760929  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:13.760943  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:13.760958  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:13.810212  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:13.810252  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:13.824229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:13.824259  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:09.588945  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:12.088514  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:14.088684  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.109760  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:15.109938  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.515603  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:16.013268  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:13.895306  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:13.895331  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:13.895344  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:13.976366  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:13.976411  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.520165  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:16.533970  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:16.534035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:16.571444  188656 cri.go:89] found id: ""
	I0731 21:01:16.571474  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.571482  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:16.571488  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:16.571539  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:16.608150  188656 cri.go:89] found id: ""
	I0731 21:01:16.608176  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.608186  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:16.608194  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:16.608254  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:16.643252  188656 cri.go:89] found id: ""
	I0731 21:01:16.643283  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.643294  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:16.643302  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:16.643363  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:16.679521  188656 cri.go:89] found id: ""
	I0731 21:01:16.679552  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.679563  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:16.679571  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:16.679624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:16.713502  188656 cri.go:89] found id: ""
	I0731 21:01:16.713532  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.713541  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:16.713547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:16.713624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:16.748276  188656 cri.go:89] found id: ""
	I0731 21:01:16.748309  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.748318  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:16.748324  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:16.748383  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:16.783895  188656 cri.go:89] found id: ""
	I0731 21:01:16.783929  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.783940  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:16.783948  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:16.784014  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:16.817362  188656 cri.go:89] found id: ""
	I0731 21:01:16.817392  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.817415  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:16.817425  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:16.817440  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:16.872584  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:16.872637  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:16.887240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:16.887275  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:16.961920  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:16.961949  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:16.961967  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:17.041889  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:17.041924  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.089420  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.089611  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:17.110442  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.111424  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.013772  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:20.514737  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.585935  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:19.600389  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:19.600475  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:19.635883  188656 cri.go:89] found id: ""
	I0731 21:01:19.635913  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.635924  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:19.635932  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:19.635995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:19.674413  188656 cri.go:89] found id: ""
	I0731 21:01:19.674441  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.674459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:19.674471  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:19.674538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:19.708181  188656 cri.go:89] found id: ""
	I0731 21:01:19.708211  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.708219  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:19.708224  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:19.708292  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:19.744737  188656 cri.go:89] found id: ""
	I0731 21:01:19.744774  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.744783  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:19.744791  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:19.744849  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:19.784366  188656 cri.go:89] found id: ""
	I0731 21:01:19.784398  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.784406  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:19.784412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:19.784465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:19.819234  188656 cri.go:89] found id: ""
	I0731 21:01:19.819269  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.819280  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:19.819289  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:19.819355  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:19.851462  188656 cri.go:89] found id: ""
	I0731 21:01:19.851494  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.851503  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:19.851510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:19.851563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:19.896575  188656 cri.go:89] found id: ""
	I0731 21:01:19.896604  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.896612  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:19.896624  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:19.896640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:19.952239  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:19.952284  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:19.969411  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:19.969442  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:20.042820  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:20.042847  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:20.042863  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:20.130070  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:20.130115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:22.674956  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:22.688548  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:22.688616  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:22.728750  188656 cri.go:89] found id: ""
	I0731 21:01:22.728775  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.728784  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:22.728790  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:22.728844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:22.763765  188656 cri.go:89] found id: ""
	I0731 21:01:22.763793  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.763801  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:22.763807  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:22.763858  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:22.799134  188656 cri.go:89] found id: ""
	I0731 21:01:22.799163  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.799172  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:22.799178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:22.799237  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:22.833972  188656 cri.go:89] found id: ""
	I0731 21:01:22.833998  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.834005  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:22.834011  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:22.834060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:22.869686  188656 cri.go:89] found id: ""
	I0731 21:01:22.869711  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.869719  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:22.869724  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:22.869776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:22.907919  188656 cri.go:89] found id: ""
	I0731 21:01:22.907950  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.907961  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:22.907969  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:22.908035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:22.947162  188656 cri.go:89] found id: ""
	I0731 21:01:22.947192  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.947204  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:22.947212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:22.947273  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:22.992822  188656 cri.go:89] found id: ""
	I0731 21:01:22.992860  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.992872  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:22.992884  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:22.992900  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:23.045552  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:23.045589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:23.059895  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:23.059925  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:23.135535  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:23.135561  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:23.135577  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:23.217468  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:23.217521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:20.588507  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.588759  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:21.611467  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:24.110813  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.514805  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.012583  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.013095  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.771615  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:25.785037  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:25.785115  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:25.821070  188656 cri.go:89] found id: ""
	I0731 21:01:25.821100  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.821112  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:25.821120  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:25.821176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:25.856174  188656 cri.go:89] found id: ""
	I0731 21:01:25.856206  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.856217  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:25.856225  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:25.856288  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:25.889440  188656 cri.go:89] found id: ""
	I0731 21:01:25.889473  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.889483  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:25.889490  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:25.889546  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:25.924770  188656 cri.go:89] found id: ""
	I0731 21:01:25.924796  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.924804  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:25.924811  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:25.924860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:25.963529  188656 cri.go:89] found id: ""
	I0731 21:01:25.963576  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.963588  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:25.963595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:25.963670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:26.000033  188656 cri.go:89] found id: ""
	I0731 21:01:26.000060  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.000069  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:26.000076  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:26.000133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:26.035310  188656 cri.go:89] found id: ""
	I0731 21:01:26.035341  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.035353  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:26.035359  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:26.035423  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:26.070096  188656 cri.go:89] found id: ""
	I0731 21:01:26.070119  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.070127  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:26.070138  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:26.070149  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:26.141198  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:26.141220  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:26.141237  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:26.219766  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:26.219805  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:26.264836  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:26.264864  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:26.316672  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:26.316709  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:28.832882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:24.588907  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.088961  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.089538  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:26.111336  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.609453  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:30.610379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.014929  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:31.512827  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.846243  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:28.846307  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:28.880312  188656 cri.go:89] found id: ""
	I0731 21:01:28.880339  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.880350  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:28.880358  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:28.880419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:28.914625  188656 cri.go:89] found id: ""
	I0731 21:01:28.914652  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.914660  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:28.914667  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:28.914726  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:28.949138  188656 cri.go:89] found id: ""
	I0731 21:01:28.949173  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.949185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:28.949192  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:28.949264  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:28.985229  188656 cri.go:89] found id: ""
	I0731 21:01:28.985258  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.985266  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:28.985272  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:28.985326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:29.021520  188656 cri.go:89] found id: ""
	I0731 21:01:29.021550  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.021562  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:29.021568  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:29.021629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:29.058639  188656 cri.go:89] found id: ""
	I0731 21:01:29.058671  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.058682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:29.058690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:29.058755  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:29.105435  188656 cri.go:89] found id: ""
	I0731 21:01:29.105458  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.105466  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:29.105472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:29.105528  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:29.147118  188656 cri.go:89] found id: ""
	I0731 21:01:29.147144  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.147152  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:29.147161  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:29.147177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:29.231698  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:29.231735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:29.276163  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:29.276200  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:29.330551  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:29.330589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:29.350293  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:29.350323  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:29.456073  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:31.956964  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:31.970712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:31.970780  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:32.009546  188656 cri.go:89] found id: ""
	I0731 21:01:32.009574  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.009585  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:32.009593  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:32.009674  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:32.046622  188656 cri.go:89] found id: ""
	I0731 21:01:32.046661  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.046672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:32.046680  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:32.046748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:32.080958  188656 cri.go:89] found id: ""
	I0731 21:01:32.080985  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.080993  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:32.080998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:32.081052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:32.117454  188656 cri.go:89] found id: ""
	I0731 21:01:32.117480  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.117489  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:32.117495  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:32.117561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:32.152335  188656 cri.go:89] found id: ""
	I0731 21:01:32.152369  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.152380  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:32.152387  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:32.152441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:32.186631  188656 cri.go:89] found id: ""
	I0731 21:01:32.186670  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.186682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:32.186691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:32.186761  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:32.221496  188656 cri.go:89] found id: ""
	I0731 21:01:32.221533  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.221544  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:32.221551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:32.221632  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:32.256315  188656 cri.go:89] found id: ""
	I0731 21:01:32.256341  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.256350  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:32.256360  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:32.256372  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:32.295759  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:32.295788  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:32.347855  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:32.347888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:32.360982  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:32.361012  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:32.433900  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:32.433926  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:32.433947  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:31.588474  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.590513  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:32.610672  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.110698  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.514600  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:36.013157  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.013369  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:35.027203  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:35.027298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:35.065567  188656 cri.go:89] found id: ""
	I0731 21:01:35.065599  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.065610  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:35.065617  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:35.065686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:35.104285  188656 cri.go:89] found id: ""
	I0731 21:01:35.104317  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.104328  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:35.104335  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:35.104430  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:35.151081  188656 cri.go:89] found id: ""
	I0731 21:01:35.151108  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.151119  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:35.151127  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:35.151190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:35.196844  188656 cri.go:89] found id: ""
	I0731 21:01:35.196875  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.196886  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:35.196894  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:35.196964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:35.253581  188656 cri.go:89] found id: ""
	I0731 21:01:35.253612  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.253623  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:35.253630  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:35.253703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:35.295791  188656 cri.go:89] found id: ""
	I0731 21:01:35.295819  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.295830  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:35.295838  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:35.295904  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:35.329405  188656 cri.go:89] found id: ""
	I0731 21:01:35.329441  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.329454  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:35.329462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:35.329526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:35.363976  188656 cri.go:89] found id: ""
	I0731 21:01:35.364009  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.364022  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:35.364035  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:35.364051  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:35.421213  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:35.421253  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:35.436612  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:35.436646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:35.514154  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:35.514182  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:35.514197  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:35.588048  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:35.588082  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:38.133466  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:38.147071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:38.147142  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:38.179992  188656 cri.go:89] found id: ""
	I0731 21:01:38.180024  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.180036  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:38.180044  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:38.180116  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:38.213784  188656 cri.go:89] found id: ""
	I0731 21:01:38.213816  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.213827  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:38.213834  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:38.213901  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:38.254190  188656 cri.go:89] found id: ""
	I0731 21:01:38.254220  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.254229  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:38.254235  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:38.254284  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:38.289695  188656 cri.go:89] found id: ""
	I0731 21:01:38.289732  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.289743  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:38.289751  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:38.289819  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:38.327743  188656 cri.go:89] found id: ""
	I0731 21:01:38.327777  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.327788  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:38.327797  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:38.327853  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:38.361373  188656 cri.go:89] found id: ""
	I0731 21:01:38.361409  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.361421  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:38.361428  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:38.361501  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:38.396832  188656 cri.go:89] found id: ""
	I0731 21:01:38.396860  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.396868  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:38.396873  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:38.396923  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:38.431822  188656 cri.go:89] found id: ""
	I0731 21:01:38.431855  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.431868  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:38.431880  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:38.431895  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:38.481994  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:38.482028  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:38.495885  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:38.495911  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:38.563384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:38.563411  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:38.563437  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:38.646806  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:38.646848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
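
Each retry cycle above issues the same per-component crictl query and finds no control-plane containers before falling back to kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of the equivalent manual check, assuming shell access to the node (for example via minikube ssh) and that crictl is pointed at the default CRI-O socket; the component list is taken from the log lines above:

  #!/usr/bin/env bash
  # List CRI containers for each control-plane component, as the harness does above.
  set -euo pipefail
  components=(kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet kubernetes-dashboard)
  for name in "${components[@]}"; do
    ids=$(sudo crictl ps -a --quiet --name="${name}")
    if [ -z "${ids}" ]; then
      echo "no container found matching \"${name}\""
    else
      echo "${name}: ${ids}"
    fi
  done

An empty result for every component, as in this run, means CRI-O never started any control-plane containers, which is why the subsequent describe-nodes calls cannot reach an API server.
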
	I0731 21:01:36.089465  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.590301  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:37.611057  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.110731  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.015769  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.513690  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:41.187323  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:41.200995  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:41.201063  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:41.241620  188656 cri.go:89] found id: ""
	I0731 21:01:41.241651  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.241663  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:41.241671  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:41.241745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:41.279565  188656 cri.go:89] found id: ""
	I0731 21:01:41.279595  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.279604  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:41.279609  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:41.279666  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:41.320710  188656 cri.go:89] found id: ""
	I0731 21:01:41.320744  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.320755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:41.320763  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:41.320834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:41.356428  188656 cri.go:89] found id: ""
	I0731 21:01:41.356460  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.356472  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:41.356480  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:41.356544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:41.390493  188656 cri.go:89] found id: ""
	I0731 21:01:41.390525  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.390536  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:41.390544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:41.390612  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:41.424244  188656 cri.go:89] found id: ""
	I0731 21:01:41.424271  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.424282  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:41.424290  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:41.424350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:41.459916  188656 cri.go:89] found id: ""
	I0731 21:01:41.459946  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.459955  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:41.459961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:41.460012  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:41.493891  188656 cri.go:89] found id: ""
	I0731 21:01:41.493917  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.493926  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:41.493936  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:41.493950  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:41.544066  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:41.544106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:41.558504  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:41.558534  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:41.632996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:41.633021  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:41.633039  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:41.712637  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:41.712677  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:41.087979  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:43.088834  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.610136  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:45.109986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.514059  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.514535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.014970  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.255947  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:44.268961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:44.269050  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:44.304621  188656 cri.go:89] found id: ""
	I0731 21:01:44.304656  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.304668  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:44.304676  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:44.304732  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:44.339389  188656 cri.go:89] found id: ""
	I0731 21:01:44.339429  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.339441  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:44.339448  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:44.339510  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:44.373069  188656 cri.go:89] found id: ""
	I0731 21:01:44.373095  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.373103  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:44.373110  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:44.373179  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:44.408784  188656 cri.go:89] found id: ""
	I0731 21:01:44.408812  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.408821  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:44.408829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:44.408896  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:44.445636  188656 cri.go:89] found id: ""
	I0731 21:01:44.445671  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.445682  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:44.445690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:44.445759  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:44.483529  188656 cri.go:89] found id: ""
	I0731 21:01:44.483565  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.483577  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:44.483585  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:44.483643  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:44.517959  188656 cri.go:89] found id: ""
	I0731 21:01:44.517980  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.517987  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:44.517993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:44.518042  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:44.552322  188656 cri.go:89] found id: ""
	I0731 21:01:44.552367  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.552392  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:44.552405  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:44.552421  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:44.625005  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:44.625030  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:44.625043  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:44.702547  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:44.702585  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:44.741754  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:44.741792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:44.795179  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:44.795216  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.309995  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:47.323993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:47.324076  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:47.365546  188656 cri.go:89] found id: ""
	I0731 21:01:47.365576  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.365587  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:47.365595  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:47.365682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:47.402774  188656 cri.go:89] found id: ""
	I0731 21:01:47.402810  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.402822  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:47.402831  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:47.402899  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:47.440716  188656 cri.go:89] found id: ""
	I0731 21:01:47.440746  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.440755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:47.440761  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:47.440811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:47.479418  188656 cri.go:89] found id: ""
	I0731 21:01:47.479450  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.479461  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:47.479469  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:47.479535  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:47.514027  188656 cri.go:89] found id: ""
	I0731 21:01:47.514065  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.514074  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:47.514081  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:47.514149  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:47.550178  188656 cri.go:89] found id: ""
	I0731 21:01:47.550203  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.550212  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:47.550218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:47.550271  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:47.587844  188656 cri.go:89] found id: ""
	I0731 21:01:47.587873  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.587883  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:47.587891  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:47.587945  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:47.627581  188656 cri.go:89] found id: ""
	I0731 21:01:47.627608  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.627620  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:47.627633  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:47.627647  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:47.683364  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:47.683408  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.697882  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:47.697917  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:47.773804  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:47.773834  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:47.773848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:47.859356  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:47.859404  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
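
Every describe-nodes attempt in this window fails the same way: the bundled v1.20.0 kubectl cannot reach the API server on localhost:8443. A minimal sketch of checking that endpoint by hand, assuming the commands are run inside the node (the kubectl path and kubeconfig location are taken verbatim from the log; probing /healthz with curl is an illustrative assumption, not something the harness does):

  # Probe the API server endpoint kubectl is failing to reach; a refused
  # connection confirms nothing is listening on 8443 (-k skips TLS verification).
  curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on localhost:8443"

  # The same describe call the harness runs, using the kubeconfig staged by minikube.
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig
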
	I0731 21:01:45.090199  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.091328  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.610631  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.109476  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:49.514186  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.013486  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.402403  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:50.417269  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:50.417332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:50.452762  188656 cri.go:89] found id: ""
	I0731 21:01:50.452786  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.452793  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:50.452799  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:50.452852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:50.486741  188656 cri.go:89] found id: ""
	I0731 21:01:50.486771  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.486782  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:50.486789  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:50.486855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:50.526144  188656 cri.go:89] found id: ""
	I0731 21:01:50.526174  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.526185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:50.526193  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:50.526246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:50.560957  188656 cri.go:89] found id: ""
	I0731 21:01:50.560985  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.560995  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:50.561003  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:50.561065  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:50.597228  188656 cri.go:89] found id: ""
	I0731 21:01:50.597258  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.597269  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:50.597275  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:50.597357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:50.638153  188656 cri.go:89] found id: ""
	I0731 21:01:50.638183  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.638199  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:50.638208  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:50.638270  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:50.672236  188656 cri.go:89] found id: ""
	I0731 21:01:50.672266  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.672274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:50.672280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:50.672340  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:50.704069  188656 cri.go:89] found id: ""
	I0731 21:01:50.704093  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.704102  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:50.704112  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:50.704125  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:50.757973  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:50.758010  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:50.771203  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:50.771229  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:50.842937  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:50.842956  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:50.842969  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:50.925819  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:50.925857  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.470691  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:53.485260  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:53.485332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:53.524110  188656 cri.go:89] found id: ""
	I0731 21:01:53.524139  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.524148  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:53.524154  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:53.524215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:53.557642  188656 cri.go:89] found id: ""
	I0731 21:01:53.557668  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.557676  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:53.557682  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:53.557737  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:53.595594  188656 cri.go:89] found id: ""
	I0731 21:01:53.595622  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.595641  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:53.595647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:53.595712  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:53.634458  188656 cri.go:89] found id: ""
	I0731 21:01:53.634487  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.634499  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:53.634507  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:53.634567  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:53.674124  188656 cri.go:89] found id: ""
	I0731 21:01:53.674149  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.674157  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:53.674164  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:53.674234  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:53.706861  188656 cri.go:89] found id: ""
	I0731 21:01:53.706888  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.706897  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:53.706903  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:53.706957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:53.745476  188656 cri.go:89] found id: ""
	I0731 21:01:53.745504  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.745511  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:53.745522  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:53.745575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:53.780847  188656 cri.go:89] found id: ""
	I0731 21:01:53.780878  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.780889  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:53.780902  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:53.780922  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:49.589017  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.088587  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.088885  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.109889  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.110634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.014383  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.512884  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:53.853469  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:53.853497  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:53.853517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:53.930506  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:53.930544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.975439  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:53.975475  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:54.027903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:54.027937  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.542860  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:56.557744  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:56.557813  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:56.596034  188656 cri.go:89] found id: ""
	I0731 21:01:56.596065  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.596075  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:56.596082  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:56.596146  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:56.631531  188656 cri.go:89] found id: ""
	I0731 21:01:56.631561  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.631572  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:56.631579  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:56.631653  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:56.665824  188656 cri.go:89] found id: ""
	I0731 21:01:56.665853  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.665865  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:56.665872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:56.665940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:56.698965  188656 cri.go:89] found id: ""
	I0731 21:01:56.698993  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.699002  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:56.699008  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:56.699074  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:56.735314  188656 cri.go:89] found id: ""
	I0731 21:01:56.735347  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.735359  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:56.735367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:56.735443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:56.770350  188656 cri.go:89] found id: ""
	I0731 21:01:56.770383  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.770393  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:56.770402  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:56.770485  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:56.808934  188656 cri.go:89] found id: ""
	I0731 21:01:56.808962  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.808970  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:56.808976  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:56.809027  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:56.845305  188656 cri.go:89] found id: ""
	I0731 21:01:56.845331  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.845354  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:56.845366  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:56.845383  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:56.922810  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:56.922832  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:56.922846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:56.998009  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:56.998046  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:57.037905  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:57.037934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:57.092438  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:57.092469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.591334  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.089696  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.110825  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.111013  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.111696  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.513270  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.514474  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
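
The interleaved pod_ready lines come from the other running profiles, which keep polling their metrics-server pods and seeing Ready=False. A sketch of the equivalent manual check; the namespace and pod name are taken from the log, while the k8s-app=metrics-server label selector and the one-minute timeout are assumptions for illustration:

  # Report the Ready condition of the metrics-server pods directly.
  kubectl --namespace kube-system get pods -l k8s-app=metrics-server \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

  # Or block until a specific pod becomes Ready, failing after the timeout if it never does.
  kubectl --namespace kube-system wait pod/metrics-server-569cc877fc-slbkm \
    --for=condition=Ready --timeout=60s
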
	I0731 21:01:59.608087  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:59.622465  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:59.622537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:59.660221  188656 cri.go:89] found id: ""
	I0731 21:01:59.660254  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.660265  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:59.660274  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:59.660338  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:59.696158  188656 cri.go:89] found id: ""
	I0731 21:01:59.696193  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.696205  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:59.696213  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:59.696272  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:59.733607  188656 cri.go:89] found id: ""
	I0731 21:01:59.733635  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.733646  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:59.733656  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:59.733727  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:59.770298  188656 cri.go:89] found id: ""
	I0731 21:01:59.770327  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.770336  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:59.770342  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:59.770396  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:59.805630  188656 cri.go:89] found id: ""
	I0731 21:01:59.805659  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.805670  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:59.805682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:59.805749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:59.841064  188656 cri.go:89] found id: ""
	I0731 21:01:59.841089  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.841098  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:59.841106  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:59.841166  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:59.877237  188656 cri.go:89] found id: ""
	I0731 21:01:59.877265  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.877274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:59.877284  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:59.877364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:59.917102  188656 cri.go:89] found id: ""
	I0731 21:01:59.917138  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.917166  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:59.917179  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:59.917196  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:59.971806  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:59.971846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:59.986267  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:59.986304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:00.063185  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:00.063227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:00.063244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:00.148498  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:00.148541  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:02.690235  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:02.704623  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:02.704703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:02.740557  188656 cri.go:89] found id: ""
	I0731 21:02:02.740588  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.740599  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:02.740606  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:02.740667  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:02.776340  188656 cri.go:89] found id: ""
	I0731 21:02:02.776382  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.776391  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:02.776396  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:02.776449  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:02.811645  188656 cri.go:89] found id: ""
	I0731 21:02:02.811673  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.811683  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:02.811691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:02.811754  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:02.847226  188656 cri.go:89] found id: ""
	I0731 21:02:02.847259  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.847267  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:02.847273  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:02.847326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:02.885591  188656 cri.go:89] found id: ""
	I0731 21:02:02.885617  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.885626  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:02.885631  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:02.885694  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:02.924250  188656 cri.go:89] found id: ""
	I0731 21:02:02.924281  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.924289  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:02.924296  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:02.924358  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:02.959608  188656 cri.go:89] found id: ""
	I0731 21:02:02.959638  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.959649  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:02.959657  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:02.959731  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:02.998175  188656 cri.go:89] found id: ""
	I0731 21:02:02.998205  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.998215  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:02.998228  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:02.998248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:03.053320  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:03.053382  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:03.067681  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:03.067711  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:03.145222  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:03.145251  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:03.145270  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:03.228413  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:03.228456  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:01.590197  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:04.087692  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:02.610477  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.110544  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:03.016030  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.513082  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.780407  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:05.793872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:05.793952  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:05.828940  188656 cri.go:89] found id: ""
	I0731 21:02:05.828971  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.828980  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:05.828987  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:05.829051  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:05.866470  188656 cri.go:89] found id: ""
	I0731 21:02:05.866503  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.866515  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:05.866522  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:05.866594  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:05.904756  188656 cri.go:89] found id: ""
	I0731 21:02:05.904792  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.904807  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:05.904814  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:05.904868  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:05.941534  188656 cri.go:89] found id: ""
	I0731 21:02:05.941564  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.941574  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:05.941581  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:05.941649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:05.980413  188656 cri.go:89] found id: ""
	I0731 21:02:05.980453  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.980465  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:05.980472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:05.980563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:06.023226  188656 cri.go:89] found id: ""
	I0731 21:02:06.023258  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.023269  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:06.023277  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:06.023345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:06.061098  188656 cri.go:89] found id: ""
	I0731 21:02:06.061130  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.061138  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:06.061145  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:06.061195  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:06.097825  188656 cri.go:89] found id: ""
	I0731 21:02:06.097852  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.097860  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:06.097870  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:06.097883  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:06.149181  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:06.149223  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:06.164610  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:06.164651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:06.248639  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:06.248666  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:06.248684  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:06.332445  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:06.332486  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:06.089967  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.588610  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.610691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.611166  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.513999  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.514554  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:11.516493  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.873697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:08.887632  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:08.887745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:08.926002  188656 cri.go:89] found id: ""
	I0731 21:02:08.926032  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.926042  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:08.926051  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:08.926117  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:08.962999  188656 cri.go:89] found id: ""
	I0731 21:02:08.963028  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.963039  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:08.963047  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:08.963103  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:09.023016  188656 cri.go:89] found id: ""
	I0731 21:02:09.023043  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.023051  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:09.023057  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:09.023109  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:09.059672  188656 cri.go:89] found id: ""
	I0731 21:02:09.059699  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.059708  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:09.059714  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:09.059774  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:09.097603  188656 cri.go:89] found id: ""
	I0731 21:02:09.097635  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.097645  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:09.097653  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:09.097720  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:09.136210  188656 cri.go:89] found id: ""
	I0731 21:02:09.136240  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.136251  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:09.136259  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:09.136326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:09.176167  188656 cri.go:89] found id: ""
	I0731 21:02:09.176204  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.176211  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:09.176218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:09.176277  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:09.214151  188656 cri.go:89] found id: ""
	I0731 21:02:09.214180  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.214189  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:09.214199  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:09.214212  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:09.267579  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:09.267618  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:09.282420  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:09.282445  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:09.354067  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:09.354092  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:09.354111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:09.433454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:09.433500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.979715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:11.993050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:11.993123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:12.027731  188656 cri.go:89] found id: ""
	I0731 21:02:12.027759  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.027767  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:12.027773  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:12.027834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:12.064410  188656 cri.go:89] found id: ""
	I0731 21:02:12.064442  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.064452  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:12.064459  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:12.064525  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:12.101061  188656 cri.go:89] found id: ""
	I0731 21:02:12.101096  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.101107  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:12.101115  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:12.101176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:12.142240  188656 cri.go:89] found id: ""
	I0731 21:02:12.142271  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.142284  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:12.142292  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:12.142357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:12.184949  188656 cri.go:89] found id: ""
	I0731 21:02:12.184980  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.184988  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:12.184994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:12.185064  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:12.226031  188656 cri.go:89] found id: ""
	I0731 21:02:12.226068  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.226080  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:12.226089  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:12.226155  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:12.272880  188656 cri.go:89] found id: ""
	I0731 21:02:12.272913  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.272923  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:12.272931  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:12.272989  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:12.306968  188656 cri.go:89] found id: ""
	I0731 21:02:12.307011  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.307033  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:12.307068  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:12.307090  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:12.359357  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:12.359402  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:12.374817  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:12.374848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:12.445107  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:12.445128  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:12.445141  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:12.530017  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:12.530058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.088281  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:13.090442  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:12.110720  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.611142  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.013967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:16.014021  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
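	
	The interleaved pod_ready.go lines from the other test processes (187862, 188133, 188266) are all waiting on metrics-server pods whose status stays "Ready":"False". The readiness test they are polling boils down to the pod's PodReady condition being True. The helper below is a minimal sketch of that check using the upstream k8s.io/api types; it is not minikube's pod_ready.go implementation, and the hard-coded condition in main is only there to make the example self-contained.
	
	    package main
	
	    import (
	    	"fmt"
	
	    	corev1 "k8s.io/api/core/v1"
	    )
	
	    // isPodReady reports whether the pod's PodReady condition is True.
	    func isPodReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }
	
	    func main() {
	    	pod := &corev1.Pod{}
	    	pod.Status.Conditions = []corev1.PodCondition{
	    		// "Ready":"False", as reported repeatedly in the log above.
	    		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	    	}
	    	fmt.Println("ready:", isPodReady(pod))
	    }
	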
	I0731 21:02:15.070277  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:15.084326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:15.084411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:15.123513  188656 cri.go:89] found id: ""
	I0731 21:02:15.123549  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.123562  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:15.123569  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:15.123624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:15.159855  188656 cri.go:89] found id: ""
	I0731 21:02:15.159888  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.159899  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:15.159908  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:15.159973  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:15.195879  188656 cri.go:89] found id: ""
	I0731 21:02:15.195911  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.195919  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:15.195926  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:15.195986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:15.231216  188656 cri.go:89] found id: ""
	I0731 21:02:15.231249  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.231258  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:15.231265  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:15.231331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:15.265711  188656 cri.go:89] found id: ""
	I0731 21:02:15.265740  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.265748  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:15.265754  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:15.265803  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:15.300991  188656 cri.go:89] found id: ""
	I0731 21:02:15.301020  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.301027  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:15.301033  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:15.301083  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:15.338507  188656 cri.go:89] found id: ""
	I0731 21:02:15.338533  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.338542  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:15.338550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:15.338614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:15.375540  188656 cri.go:89] found id: ""
	I0731 21:02:15.375583  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.375595  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:15.375606  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:15.375631  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:15.428903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:15.428946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:15.444018  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:15.444052  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:15.518807  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.518842  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:15.518859  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:15.602655  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:15.602693  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.158731  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:18.172861  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:18.172940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:18.207451  188656 cri.go:89] found id: ""
	I0731 21:02:18.207480  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.207489  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:18.207495  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:18.207555  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:18.244974  188656 cri.go:89] found id: ""
	I0731 21:02:18.245004  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.245013  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:18.245019  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:18.245079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:18.281589  188656 cri.go:89] found id: ""
	I0731 21:02:18.281622  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.281630  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:18.281637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:18.281698  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:18.321413  188656 cri.go:89] found id: ""
	I0731 21:02:18.321445  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.321455  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:18.321461  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:18.321526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:18.360600  188656 cri.go:89] found id: ""
	I0731 21:02:18.360627  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.360639  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:18.360647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:18.360707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:18.396312  188656 cri.go:89] found id: ""
	I0731 21:02:18.396344  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.396356  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:18.396364  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:18.396451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:18.431586  188656 cri.go:89] found id: ""
	I0731 21:02:18.431618  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.431630  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:18.431637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:18.431711  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:18.472995  188656 cri.go:89] found id: ""
	I0731 21:02:18.473025  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.473035  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:18.473047  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:18.473063  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:18.558826  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:18.558865  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.600083  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:18.600110  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:18.657944  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:18.657988  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:18.672860  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:18.672888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:18.748806  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
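	
	Every "describe nodes" attempt in this log fails the same way: kubectl cannot reach the apiserver because nothing accepts connections on localhost:8443. A quick way to confirm that from the node is a plain TCP dial, sketched below (assumption: run on the affected node; the 2-second timeout is arbitrary).
	
	    package main
	
	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )
	
	    func main() {
	    	// With no apiserver listening, this returns "connection refused",
	    	// matching the kubectl error in the log.
	    	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	    	if err != nil {
	    		fmt.Println("apiserver port not reachable:", err)
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("something is listening on 127.0.0.1:8443")
	    }
	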
	I0731 21:02:15.589795  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.088699  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:17.112784  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:19.609312  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.513798  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.014437  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.249418  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:21.263304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:21.263385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:21.298591  188656 cri.go:89] found id: ""
	I0731 21:02:21.298624  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.298635  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:21.298643  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:21.298707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:21.335913  188656 cri.go:89] found id: ""
	I0731 21:02:21.335939  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.335947  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:21.335954  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:21.336011  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:21.378314  188656 cri.go:89] found id: ""
	I0731 21:02:21.378347  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.378359  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:21.378368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:21.378436  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:21.422707  188656 cri.go:89] found id: ""
	I0731 21:02:21.422738  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.422748  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:21.422757  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:21.422826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:21.487851  188656 cri.go:89] found id: ""
	I0731 21:02:21.487878  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.487887  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:21.487893  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:21.487946  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:21.528944  188656 cri.go:89] found id: ""
	I0731 21:02:21.528970  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.528981  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:21.528990  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:21.529054  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:21.565091  188656 cri.go:89] found id: ""
	I0731 21:02:21.565118  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.565126  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:21.565132  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:21.565182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:21.599985  188656 cri.go:89] found id: ""
	I0731 21:02:21.600015  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.600027  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:21.600041  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:21.600057  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:21.652065  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:21.652106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:21.666497  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:21.666528  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:21.741853  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:21.741893  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:21.741919  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:21.822478  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:21.822517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:20.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:22.589558  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.610996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.111590  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:23.513209  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:25.514400  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.363018  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:24.375640  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:24.375704  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:24.411383  188656 cri.go:89] found id: ""
	I0731 21:02:24.411416  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.411427  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:24.411436  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:24.411513  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:24.447536  188656 cri.go:89] found id: ""
	I0731 21:02:24.447565  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.447573  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:24.447578  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:24.447651  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:24.489270  188656 cri.go:89] found id: ""
	I0731 21:02:24.489301  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.489311  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:24.489320  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:24.489398  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:24.527891  188656 cri.go:89] found id: ""
	I0731 21:02:24.527922  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.527932  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:24.527938  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:24.527998  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:24.566854  188656 cri.go:89] found id: ""
	I0731 21:02:24.566886  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.566897  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:24.566904  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:24.566974  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:24.606234  188656 cri.go:89] found id: ""
	I0731 21:02:24.606267  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.606278  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:24.606285  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:24.606357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:24.642880  188656 cri.go:89] found id: ""
	I0731 21:02:24.642909  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.642921  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:24.642929  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:24.642982  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:24.680069  188656 cri.go:89] found id: ""
	I0731 21:02:24.680101  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.680112  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:24.680124  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:24.680142  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:24.735337  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:24.735378  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:24.749010  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:24.749040  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:24.826406  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:24.826441  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:24.826458  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.906995  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:24.907049  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.451405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:27.474178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:27.474251  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:27.514912  188656 cri.go:89] found id: ""
	I0731 21:02:27.514938  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.514945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:27.514951  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:27.515007  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:27.552850  188656 cri.go:89] found id: ""
	I0731 21:02:27.552880  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.552890  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:27.552896  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:27.552953  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:27.590468  188656 cri.go:89] found id: ""
	I0731 21:02:27.590496  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.590503  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:27.590509  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:27.590572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:27.626295  188656 cri.go:89] found id: ""
	I0731 21:02:27.626322  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.626330  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:27.626339  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:27.626391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:27.662654  188656 cri.go:89] found id: ""
	I0731 21:02:27.662690  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.662701  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:27.662708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:27.662770  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:27.699528  188656 cri.go:89] found id: ""
	I0731 21:02:27.699558  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.699566  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:27.699572  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:27.699639  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:27.740501  188656 cri.go:89] found id: ""
	I0731 21:02:27.740528  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.740539  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:27.740547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:27.740613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:27.778919  188656 cri.go:89] found id: ""
	I0731 21:02:27.778954  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.778966  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:27.778980  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:27.778999  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.815475  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:27.815500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:27.866578  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:27.866615  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:27.880799  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:27.880830  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:27.948987  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:27.949014  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:27.949032  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.596180  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:27.088624  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:26.610897  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:29.110263  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:28.014828  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.514006  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.532314  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:30.546245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:30.546317  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:30.581736  188656 cri.go:89] found id: ""
	I0731 21:02:30.581763  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.581772  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:30.581778  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:30.581837  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:30.618790  188656 cri.go:89] found id: ""
	I0731 21:02:30.618816  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.618824  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:30.618830  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:30.618886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:30.654504  188656 cri.go:89] found id: ""
	I0731 21:02:30.654530  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.654538  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:30.654544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:30.654603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:30.690570  188656 cri.go:89] found id: ""
	I0731 21:02:30.690598  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.690609  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:30.690617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:30.690683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:30.739676  188656 cri.go:89] found id: ""
	I0731 21:02:30.739705  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.739715  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:30.739723  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:30.739789  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:30.777860  188656 cri.go:89] found id: ""
	I0731 21:02:30.777891  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.777902  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:30.777911  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:30.777995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:30.814036  188656 cri.go:89] found id: ""
	I0731 21:02:30.814073  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.814088  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:30.814096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:30.814168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:30.847262  188656 cri.go:89] found id: ""
	I0731 21:02:30.847292  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.847304  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:30.847316  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:30.847338  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:30.898556  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:30.898596  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:30.912940  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:30.912974  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:30.987384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:30.987405  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:30.987419  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:31.071376  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:31.071416  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:33.613677  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:33.628304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:33.628380  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:33.662932  188656 cri.go:89] found id: ""
	I0731 21:02:33.662965  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.662977  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:33.662985  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:33.663055  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:33.697445  188656 cri.go:89] found id: ""
	I0731 21:02:33.697477  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.697487  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:33.697493  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:33.697553  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:33.734480  188656 cri.go:89] found id: ""
	I0731 21:02:33.734516  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.734527  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:33.734536  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:33.734614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:33.770069  188656 cri.go:89] found id: ""
	I0731 21:02:33.770095  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.770104  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:33.770111  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:33.770194  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:33.806315  188656 cri.go:89] found id: ""
	I0731 21:02:33.806341  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.806350  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:33.806356  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:33.806408  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:29.592432  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:32.088842  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:34.089378  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:31.112420  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.611815  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.014022  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:35.014517  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:37.018514  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.842747  188656 cri.go:89] found id: ""
	I0731 21:02:33.842775  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.842782  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:33.842789  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:33.842856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:33.877581  188656 cri.go:89] found id: ""
	I0731 21:02:33.877607  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.877616  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:33.877622  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:33.877682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:33.913238  188656 cri.go:89] found id: ""
	I0731 21:02:33.913263  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.913271  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:33.913282  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:33.913298  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:33.967112  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:33.967148  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:33.980961  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:33.980994  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:34.054886  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:34.054917  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:34.054939  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:34.143088  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:34.143127  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:36.687110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:36.700649  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:36.700725  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:36.737796  188656 cri.go:89] found id: ""
	I0731 21:02:36.737829  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.737841  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:36.737849  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:36.737916  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:36.773010  188656 cri.go:89] found id: ""
	I0731 21:02:36.773048  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.773059  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:36.773067  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:36.773136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:36.813945  188656 cri.go:89] found id: ""
	I0731 21:02:36.813978  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.813988  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:36.813994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:36.814047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:36.849826  188656 cri.go:89] found id: ""
	I0731 21:02:36.849860  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.849872  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:36.849880  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:36.849943  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:36.887200  188656 cri.go:89] found id: ""
	I0731 21:02:36.887233  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.887244  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:36.887253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:36.887391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:36.922529  188656 cri.go:89] found id: ""
	I0731 21:02:36.922562  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.922573  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:36.922582  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:36.922644  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:36.958119  188656 cri.go:89] found id: ""
	I0731 21:02:36.958154  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.958166  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:36.958174  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:36.958240  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:37.001071  188656 cri.go:89] found id: ""
	I0731 21:02:37.001104  188656 logs.go:276] 0 containers: []
	W0731 21:02:37.001113  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:37.001123  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:37.001136  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:37.041248  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:37.041288  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:37.100519  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:37.100558  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:37.115157  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:37.115188  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:37.191232  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:37.191259  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:37.191277  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:36.588213  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.589224  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:36.109307  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.110675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:40.111284  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.514052  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.013265  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.772834  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:39.788137  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:39.788203  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:39.827329  188656 cri.go:89] found id: ""
	I0731 21:02:39.827361  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.827371  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:39.827378  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:39.827458  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:39.864855  188656 cri.go:89] found id: ""
	I0731 21:02:39.864882  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.864889  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:39.864897  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:39.864958  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:39.901955  188656 cri.go:89] found id: ""
	I0731 21:02:39.901981  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.901990  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:39.901996  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:39.902059  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:39.941376  188656 cri.go:89] found id: ""
	I0731 21:02:39.941402  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.941412  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:39.941418  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:39.941473  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:39.975321  188656 cri.go:89] found id: ""
	I0731 21:02:39.975352  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.975364  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:39.975394  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:39.975465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:40.010106  188656 cri.go:89] found id: ""
	I0731 21:02:40.010136  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.010148  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:40.010157  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:40.010220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:40.043963  188656 cri.go:89] found id: ""
	I0731 21:02:40.043997  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.044009  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:40.044017  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:40.044089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:40.079178  188656 cri.go:89] found id: ""
	I0731 21:02:40.079216  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.079224  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:40.079234  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:40.079248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:40.141115  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:40.141158  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:40.156722  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:40.156758  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:40.233758  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:40.233782  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:40.233797  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:40.317316  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:40.317375  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
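	(Note: the cycle above repeats for the remainder of this log. minikube is probing the node for control-plane containers and, finding none, falling back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The commands below are copied from the Run: lines above; re-running them on the node is an illustrative way to reproduce the empty results, assuming SSH access to the minikube VM:
	
		sudo pgrep -xnf kube-apiserver.*minikube.*        # no apiserver process found
		sudo crictl ps -a --quiet --name=kube-apiserver   # returns no container IDs
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		                                                  # fails: connection to localhost:8443 refused
	
	Because the apiserver never came up, every kubectl call against localhost:8443 is refused and the crictl listings stay empty, which is why each cycle ends with the same "failed describe nodes" error.)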
	I0731 21:02:42.858649  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:42.872135  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:42.872221  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:42.911966  188656 cri.go:89] found id: ""
	I0731 21:02:42.911998  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.912007  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:42.912014  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:42.912081  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:42.950036  188656 cri.go:89] found id: ""
	I0731 21:02:42.950070  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.950079  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:42.950085  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:42.950138  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:42.987201  188656 cri.go:89] found id: ""
	I0731 21:02:42.987233  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.987245  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:42.987253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:42.987326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:43.027250  188656 cri.go:89] found id: ""
	I0731 21:02:43.027285  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.027297  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:43.027306  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:43.027374  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:43.063419  188656 cri.go:89] found id: ""
	I0731 21:02:43.063448  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.063456  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:43.063463  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:43.063527  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:43.101155  188656 cri.go:89] found id: ""
	I0731 21:02:43.101184  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.101193  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:43.101199  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:43.101249  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:43.142633  188656 cri.go:89] found id: ""
	I0731 21:02:43.142658  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.142667  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:43.142675  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:43.142741  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:43.177747  188656 cri.go:89] found id: ""
	I0731 21:02:43.177780  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.177789  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:43.177799  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:43.177813  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:43.228074  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:43.228114  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:43.242132  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:43.242165  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:43.313026  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:43.313054  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:43.313072  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:43.394620  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:43.394663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:40.589306  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.589428  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.612236  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.110401  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:44.513370  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:46.514350  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.937932  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:45.951871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:45.951964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:45.987615  188656 cri.go:89] found id: ""
	I0731 21:02:45.987642  188656 logs.go:276] 0 containers: []
	W0731 21:02:45.987650  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:45.987656  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:45.987715  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:46.022632  188656 cri.go:89] found id: ""
	I0731 21:02:46.022659  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.022667  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:46.022674  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:46.022746  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:46.061153  188656 cri.go:89] found id: ""
	I0731 21:02:46.061182  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.061191  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:46.061196  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:46.061246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:46.099168  188656 cri.go:89] found id: ""
	I0731 21:02:46.099197  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.099206  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:46.099212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:46.099266  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:46.137269  188656 cri.go:89] found id: ""
	I0731 21:02:46.137300  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.137312  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:46.137321  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:46.137403  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:46.172330  188656 cri.go:89] found id: ""
	I0731 21:02:46.172391  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.172404  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:46.172417  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:46.172489  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:46.213314  188656 cri.go:89] found id: ""
	I0731 21:02:46.213358  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.213370  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:46.213378  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:46.213451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:46.248663  188656 cri.go:89] found id: ""
	I0731 21:02:46.248697  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.248707  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:46.248719  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:46.248735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:46.305433  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:46.305472  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:46.319065  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:46.319098  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:46.387025  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:46.387046  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:46.387058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:46.476721  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:46.476769  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:44.589757  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.089954  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.112823  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.114163  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.014193  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.014760  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.020882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:49.036502  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:49.036573  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:49.076478  188656 cri.go:89] found id: ""
	I0731 21:02:49.076509  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.076518  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:49.076525  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:49.076578  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:49.116065  188656 cri.go:89] found id: ""
	I0731 21:02:49.116098  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.116106  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:49.116112  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:49.116168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:49.153237  188656 cri.go:89] found id: ""
	I0731 21:02:49.153274  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.153287  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:49.153295  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:49.153385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:49.192821  188656 cri.go:89] found id: ""
	I0731 21:02:49.192849  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.192858  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:49.192864  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:49.192918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:49.230627  188656 cri.go:89] found id: ""
	I0731 21:02:49.230660  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.230671  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:49.230679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:49.230749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:49.266575  188656 cri.go:89] found id: ""
	I0731 21:02:49.266603  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.266611  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:49.266617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:49.266688  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:49.312489  188656 cri.go:89] found id: ""
	I0731 21:02:49.312522  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.312533  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:49.312541  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:49.312613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:49.348907  188656 cri.go:89] found id: ""
	I0731 21:02:49.348932  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.348941  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:49.348950  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:49.348965  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:49.363229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:49.363267  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:49.435708  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:49.435732  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:49.435745  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.522002  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:49.522047  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:49.566823  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:49.566868  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.122660  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:52.136559  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:52.136629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:52.173198  188656 cri.go:89] found id: ""
	I0731 21:02:52.173227  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.173236  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:52.173242  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:52.173310  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:52.208464  188656 cri.go:89] found id: ""
	I0731 21:02:52.208503  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.208514  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:52.208521  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:52.208590  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:52.246052  188656 cri.go:89] found id: ""
	I0731 21:02:52.246084  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.246091  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:52.246098  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:52.246160  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:52.281798  188656 cri.go:89] found id: ""
	I0731 21:02:52.281831  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.281843  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:52.281852  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:52.281918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:52.318924  188656 cri.go:89] found id: ""
	I0731 21:02:52.318954  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.318975  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:52.318983  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:52.319052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:52.356752  188656 cri.go:89] found id: ""
	I0731 21:02:52.356788  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.356800  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:52.356809  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:52.356874  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:52.391507  188656 cri.go:89] found id: ""
	I0731 21:02:52.391537  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.391545  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:52.391551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:52.391602  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:52.430714  188656 cri.go:89] found id: ""
	I0731 21:02:52.430749  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.430761  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:52.430774  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:52.430792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:52.482600  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:52.482629  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.535317  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:52.535361  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:52.549835  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:52.549874  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:52.628319  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:52.628347  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:52.628365  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.590499  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:52.089170  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.089832  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.610237  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.112782  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:53.513932  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.516784  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.216678  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:55.231142  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:55.231225  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:55.266283  188656 cri.go:89] found id: ""
	I0731 21:02:55.266321  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.266334  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:55.266341  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:55.266399  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:55.301457  188656 cri.go:89] found id: ""
	I0731 21:02:55.301493  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.301506  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:55.301514  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:55.301574  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:55.338427  188656 cri.go:89] found id: ""
	I0731 21:02:55.338453  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.338461  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:55.338467  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:55.338521  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:55.373718  188656 cri.go:89] found id: ""
	I0731 21:02:55.373748  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.373757  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:55.373764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:55.373846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:55.410989  188656 cri.go:89] found id: ""
	I0731 21:02:55.411022  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.411034  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:55.411042  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:55.411100  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:55.452867  188656 cri.go:89] found id: ""
	I0731 21:02:55.452904  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.452915  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:55.452924  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:55.452995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:55.512781  188656 cri.go:89] found id: ""
	I0731 21:02:55.512809  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.512821  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:55.512829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:55.512894  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:55.550460  188656 cri.go:89] found id: ""
	I0731 21:02:55.550487  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.550495  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:55.550505  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:55.550521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:55.625776  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:55.625804  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:55.625821  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:55.711276  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:55.711322  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:55.765078  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:55.765111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:55.818131  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:55.818176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:58.332914  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:58.346908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:58.346992  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:58.383641  188656 cri.go:89] found id: ""
	I0731 21:02:58.383686  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.383695  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:58.383700  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:58.383753  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:58.419538  188656 cri.go:89] found id: ""
	I0731 21:02:58.419566  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.419576  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:58.419584  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:58.419649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:58.457036  188656 cri.go:89] found id: ""
	I0731 21:02:58.457069  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.457080  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:58.457088  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:58.457162  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:58.497596  188656 cri.go:89] found id: ""
	I0731 21:02:58.497621  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.497629  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:58.497635  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:58.497706  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:58.538184  188656 cri.go:89] found id: ""
	I0731 21:02:58.538211  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.538220  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:58.538226  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:58.538291  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:58.584428  188656 cri.go:89] found id: ""
	I0731 21:02:58.584457  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.584468  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:58.584476  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:58.584537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:58.625052  188656 cri.go:89] found id: ""
	I0731 21:02:58.625084  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.625096  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:58.625103  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:58.625171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:58.662222  188656 cri.go:89] found id: ""
	I0731 21:02:58.662248  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.662256  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:58.662266  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:58.662278  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:58.740491  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:58.740530  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:58.782685  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:58.782714  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:58.833620  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:58.833668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:56.091277  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.589516  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:56.609399  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.610957  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.013927  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:00.015179  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.848679  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:58.848713  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:58.925496  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.426171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:01.440261  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:01.440341  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:01.477362  188656 cri.go:89] found id: ""
	I0731 21:03:01.477393  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.477405  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:01.477414  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:01.477483  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:01.516640  188656 cri.go:89] found id: ""
	I0731 21:03:01.516675  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.516692  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:01.516701  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:01.516764  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:01.560713  188656 cri.go:89] found id: ""
	I0731 21:03:01.560744  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.560756  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:01.560762  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:01.560844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:01.604050  188656 cri.go:89] found id: ""
	I0731 21:03:01.604086  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.604097  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:01.604105  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:01.604170  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:01.641358  188656 cri.go:89] found id: ""
	I0731 21:03:01.641391  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.641401  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:01.641406  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:01.641471  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:01.677332  188656 cri.go:89] found id: ""
	I0731 21:03:01.677380  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.677390  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:01.677397  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:01.677459  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:01.713781  188656 cri.go:89] found id: ""
	I0731 21:03:01.713815  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.713826  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:01.713833  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:01.713914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:01.757499  188656 cri.go:89] found id: ""
	I0731 21:03:01.757543  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.757552  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:01.757563  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:01.757575  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:01.832330  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.832370  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:01.832384  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:01.918996  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:01.919050  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:01.979268  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:01.979307  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:02.037528  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:02.037564  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:00.591373  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.089405  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:01.110471  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.611348  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:02.513998  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:05.015060  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:04.552758  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:04.566881  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:04.566960  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:04.604631  188656 cri.go:89] found id: ""
	I0731 21:03:04.604669  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.604680  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:04.604688  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:04.604791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:04.644027  188656 cri.go:89] found id: ""
	I0731 21:03:04.644052  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.644061  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:04.644068  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:04.644134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:04.680010  188656 cri.go:89] found id: ""
	I0731 21:03:04.680037  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.680045  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:04.680050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:04.680102  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:04.717095  188656 cri.go:89] found id: ""
	I0731 21:03:04.717123  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.717133  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:04.717140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:04.717212  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:04.755297  188656 cri.go:89] found id: ""
	I0731 21:03:04.755324  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.755331  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:04.755337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:04.755387  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:04.792073  188656 cri.go:89] found id: ""
	I0731 21:03:04.792104  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.792113  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:04.792119  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:04.792168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:04.828428  188656 cri.go:89] found id: ""
	I0731 21:03:04.828460  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.828468  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:04.828475  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:04.828541  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:04.863871  188656 cri.go:89] found id: ""
	I0731 21:03:04.863905  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.863916  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:04.863929  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:04.863946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:04.879591  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:04.879626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:04.962199  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:04.962227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:04.962245  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.048502  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:05.048547  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:05.090812  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:05.090838  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:07.647307  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:07.664586  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:07.664656  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:07.719851  188656 cri.go:89] found id: ""
	I0731 21:03:07.719887  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.719899  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:07.719908  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:07.719978  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:07.778295  188656 cri.go:89] found id: ""
	I0731 21:03:07.778330  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.778343  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:07.778350  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:07.778417  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:07.817911  188656 cri.go:89] found id: ""
	I0731 21:03:07.817937  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.817947  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:07.817954  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:07.818004  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:07.853177  188656 cri.go:89] found id: ""
	I0731 21:03:07.853211  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.853222  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:07.853229  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:07.853308  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:07.888992  188656 cri.go:89] found id: ""
	I0731 21:03:07.889020  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.889046  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:07.889055  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:07.889133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:07.924327  188656 cri.go:89] found id: ""
	I0731 21:03:07.924358  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.924369  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:07.924377  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:07.924461  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:07.964438  188656 cri.go:89] found id: ""
	I0731 21:03:07.964470  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.964480  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:07.964489  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:07.964572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:08.003566  188656 cri.go:89] found id: ""
	I0731 21:03:08.003610  188656 logs.go:276] 0 containers: []
	W0731 21:03:08.003621  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:08.003634  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:08.003651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:08.044246  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:08.044286  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:08.097479  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:08.097517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:08.113636  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:08.113663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:08.187217  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:08.187244  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:08.187261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.090205  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.589488  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:06.110184  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:08.111598  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.611986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.513036  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:09.513637  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.514176  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.771248  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:10.786159  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:10.786232  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:10.823724  188656 cri.go:89] found id: ""
	I0731 21:03:10.823756  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.823769  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:10.823777  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:10.823846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:10.862440  188656 cri.go:89] found id: ""
	I0731 21:03:10.862468  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.862480  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:10.862488  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:10.862544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:10.901499  188656 cri.go:89] found id: ""
	I0731 21:03:10.901527  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.901539  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:10.901547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:10.901611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:10.940255  188656 cri.go:89] found id: ""
	I0731 21:03:10.940279  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.940287  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:10.940293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:10.940356  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:10.975315  188656 cri.go:89] found id: ""
	I0731 21:03:10.975344  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.975353  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:10.975360  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:10.975420  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:11.011453  188656 cri.go:89] found id: ""
	I0731 21:03:11.011482  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.011538  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:11.011549  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:11.011611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:11.047846  188656 cri.go:89] found id: ""
	I0731 21:03:11.047887  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.047899  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:11.047907  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:11.047972  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:11.086243  188656 cri.go:89] found id: ""
	I0731 21:03:11.086271  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.086282  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:11.086293  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:11.086309  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:11.139390  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:11.139430  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:11.154637  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:11.154669  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:11.225996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:11.226019  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:11.226035  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:11.305235  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:11.305280  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
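Each of these diagnostic cycles runs the same container query that cri.go records above, sudo crictl ps -a --quiet --name=<component>, and it keeps returning an empty ID list because the control plane never came up. A minimal sketch of that check, assuming direct shell access to the node rather than minikube's SSH runner (the helper below is illustrative, not minikube's own code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers runs the same query the log shows for each component:
    //   sudo crictl ps -a --quiet --name=<name>
    // and returns whatever container IDs come back (none, in the failing run).
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(name)
            if err != nil {
                fmt.Printf("%s: error: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }

On a node in this state every component prints "0 containers", which is exactly what the repeated "found id: \"\"" / "0 containers: []" lines above record.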
	I0731 21:03:09.589831  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.590312  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.089750  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.110191  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:15.112258  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.013609  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:16.014143  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.845792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:13.859185  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:13.859261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:13.896017  188656 cri.go:89] found id: ""
	I0731 21:03:13.896047  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.896055  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:13.896061  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:13.896123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:13.932442  188656 cri.go:89] found id: ""
	I0731 21:03:13.932475  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.932486  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:13.932494  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:13.932564  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:13.971233  188656 cri.go:89] found id: ""
	I0731 21:03:13.971265  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.971274  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:13.971280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:13.971331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:14.009757  188656 cri.go:89] found id: ""
	I0731 21:03:14.009787  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.009796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:14.009805  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:14.009870  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:14.047946  188656 cri.go:89] found id: ""
	I0731 21:03:14.047979  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.047990  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:14.047998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:14.048056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:14.084687  188656 cri.go:89] found id: ""
	I0731 21:03:14.084720  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.084731  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:14.084739  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:14.084805  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:14.124831  188656 cri.go:89] found id: ""
	I0731 21:03:14.124861  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.124870  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:14.124876  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:14.124929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:14.161242  188656 cri.go:89] found id: ""
	I0731 21:03:14.161275  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.161286  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:14.161295  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:14.161308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:14.241060  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:14.241115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:14.282382  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:14.282414  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:14.335201  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:14.335249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:14.351345  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:14.351379  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:14.436524  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:16.937313  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:16.951403  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:16.951490  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:16.991735  188656 cri.go:89] found id: ""
	I0731 21:03:16.991766  188656 logs.go:276] 0 containers: []
	W0731 21:03:16.991777  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:16.991785  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:16.991852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:17.030327  188656 cri.go:89] found id: ""
	I0731 21:03:17.030353  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.030360  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:17.030366  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:17.030419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:17.068161  188656 cri.go:89] found id: ""
	I0731 21:03:17.068195  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.068206  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:17.068214  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:17.068286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:17.105561  188656 cri.go:89] found id: ""
	I0731 21:03:17.105590  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.105601  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:17.105609  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:17.105684  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:17.144503  188656 cri.go:89] found id: ""
	I0731 21:03:17.144529  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.144540  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:17.144547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:17.144610  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:17.183709  188656 cri.go:89] found id: ""
	I0731 21:03:17.183738  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.183747  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:17.183753  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:17.183815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:17.222083  188656 cri.go:89] found id: ""
	I0731 21:03:17.222109  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.222117  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:17.222124  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:17.222178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:17.259503  188656 cri.go:89] found id: ""
	I0731 21:03:17.259534  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.259547  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:17.259561  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:17.259578  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:17.300603  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:17.300642  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:17.352194  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:17.352235  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:17.367179  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:17.367209  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:17.440051  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:17.440074  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:17.440088  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:16.589914  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.082985  188133 pod_ready.go:81] duration metric: took 4m0.000734125s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:18.083015  188133 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:03:18.083039  188133 pod_ready.go:38] duration metric: took 4m12.543404692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:18.083069  188133 kubeadm.go:597] duration metric: took 4m20.473129745s to restartPrimaryControlPlane
	W0731 21:03:18.083176  188133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:18.083210  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
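The 4m0s above is the per-pod readiness deadline: once it expires the harness stops waiting, prints the "will reset cluster" warning, and falls back to kubeadm reset. A minimal sketch of that wait-then-fallback shape, with the readiness predicate left abstract (the real check reads the pod's Ready condition via the Kubernetes API; the 10s timeout in main is shortened only so the example finishes quickly):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitOrFallback polls check until it reports true or the deadline passes,
    // then runs fallback -- the same shape as "wait 4m for Ready, else reset".
    func waitOrFallback(timeout time.Duration, check func() bool, fallback func() error) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        tick := time.NewTicker(2 * time.Second)
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                return fallback() // deadline exceeded: give up waiting and reset
            case <-tick.C:
                if check() {
                    return nil
                }
            }
        }
    }

    func main() {
        // The harness uses a 4m0s deadline; 10s here just keeps the demo short.
        err := waitOrFallback(10*time.Second,
            func() bool { return false }, // pod never becomes Ready
            func() error { return errors.New("falling back to kubeadm reset") })
        fmt.Println(err)
    }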
	I0731 21:03:17.610274  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:19.611592  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.514266  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.514967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.027644  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:20.041735  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:20.041826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:20.077436  188656 cri.go:89] found id: ""
	I0731 21:03:20.077470  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.077483  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:20.077491  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:20.077558  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:20.117420  188656 cri.go:89] found id: ""
	I0731 21:03:20.117449  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.117459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:20.117466  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:20.117533  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:20.157794  188656 cri.go:89] found id: ""
	I0731 21:03:20.157827  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.157838  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:20.157847  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:20.157914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:20.193760  188656 cri.go:89] found id: ""
	I0731 21:03:20.193788  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.193796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:20.193803  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:20.193856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:20.231731  188656 cri.go:89] found id: ""
	I0731 21:03:20.231764  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.231777  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:20.231785  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:20.231856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:20.268666  188656 cri.go:89] found id: ""
	I0731 21:03:20.268697  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.268709  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:20.268717  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:20.268786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:20.304355  188656 cri.go:89] found id: ""
	I0731 21:03:20.304392  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.304406  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:20.304414  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:20.304478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:20.343886  188656 cri.go:89] found id: ""
	I0731 21:03:20.343915  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.343927  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:20.343940  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:20.343957  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:20.358460  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:20.358494  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:20.435473  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:20.435499  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:20.435522  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:20.517961  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:20.518002  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:20.561528  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:20.561567  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.119570  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:23.134276  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:23.134366  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:23.172808  188656 cri.go:89] found id: ""
	I0731 21:03:23.172837  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.172846  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:23.172852  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:23.172914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:23.208038  188656 cri.go:89] found id: ""
	I0731 21:03:23.208067  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.208080  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:23.208086  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:23.208140  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:23.244493  188656 cri.go:89] found id: ""
	I0731 21:03:23.244523  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.244533  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:23.244539  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:23.244605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:23.280474  188656 cri.go:89] found id: ""
	I0731 21:03:23.280503  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.280510  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:23.280517  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:23.280581  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:23.317381  188656 cri.go:89] found id: ""
	I0731 21:03:23.317415  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.317428  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:23.317441  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:23.317511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:23.357023  188656 cri.go:89] found id: ""
	I0731 21:03:23.357051  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.357062  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:23.357071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:23.357134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:23.400176  188656 cri.go:89] found id: ""
	I0731 21:03:23.400211  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.400223  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:23.400230  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:23.400298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:23.440157  188656 cri.go:89] found id: ""
	I0731 21:03:23.440190  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.440201  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:23.440213  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:23.440234  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.494762  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:23.494802  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:23.511463  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:23.511510  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:23.600359  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:23.600383  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:23.600403  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:23.682683  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:23.682723  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:22.111495  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:24.112248  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:23.013460  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:25.014605  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:27.014900  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:26.225923  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:26.245708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.245791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.282882  188656 cri.go:89] found id: ""
	I0731 21:03:26.282910  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.282920  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:26.282928  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.282987  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.324227  188656 cri.go:89] found id: ""
	I0731 21:03:26.324268  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.324279  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:26.324287  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.324349  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.365996  188656 cri.go:89] found id: ""
	I0731 21:03:26.366027  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.366038  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:26.366047  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.366119  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.403790  188656 cri.go:89] found id: ""
	I0731 21:03:26.403823  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.403835  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:26.403844  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.403915  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.442924  188656 cri.go:89] found id: ""
	I0731 21:03:26.442947  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.442957  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:26.442964  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.443026  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.482260  188656 cri.go:89] found id: ""
	I0731 21:03:26.482286  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.482294  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:26.482300  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.482364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.526385  188656 cri.go:89] found id: ""
	I0731 21:03:26.526420  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.526432  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.526442  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:26.526511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:26.565217  188656 cri.go:89] found id: ""
	I0731 21:03:26.565250  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.565262  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:26.565275  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:26.565294  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:26.623437  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:26.623478  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:26.639642  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:26.639683  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:26.720274  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:26.720309  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.720325  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:26.799689  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:26.799728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:26.111147  188266 pod_ready.go:81] duration metric: took 4m0.007359775s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:26.111173  188266 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:03:26.111180  188266 pod_ready.go:38] duration metric: took 4m2.82978193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:26.111195  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:03:26.111220  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.111267  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.179210  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:26.179240  188266 cri.go:89] found id: ""
	I0731 21:03:26.179251  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:26.179315  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.184349  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.184430  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.221238  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:26.221267  188266 cri.go:89] found id: ""
	I0731 21:03:26.221277  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:26.221349  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.225908  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.225985  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.276864  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:26.276895  188266 cri.go:89] found id: ""
	I0731 21:03:26.276907  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:26.276974  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.281921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.282003  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.320868  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:26.320903  188266 cri.go:89] found id: ""
	I0731 21:03:26.320914  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:26.320984  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.326203  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.326272  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.378409  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:26.378433  188266 cri.go:89] found id: ""
	I0731 21:03:26.378442  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:26.378504  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.384006  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.384111  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.431113  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:26.431147  188266 cri.go:89] found id: ""
	I0731 21:03:26.431158  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:26.431226  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.437136  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.437213  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.484223  188266 cri.go:89] found id: ""
	I0731 21:03:26.484247  188266 logs.go:276] 0 containers: []
	W0731 21:03:26.484257  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.484263  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:26.484319  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:26.530433  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:26.530470  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.530476  188266 cri.go:89] found id: ""
	I0731 21:03:26.530486  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:26.530551  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.535747  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.541379  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:26.541406  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.586730  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.586769  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:27.133617  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:27.133672  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:27.183805  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:27.183846  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:27.226579  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:27.226620  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:27.290635  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:27.290671  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:27.330700  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:27.330732  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:27.370882  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:27.370918  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:27.426426  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:27.426471  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:27.466359  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:27.466396  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:27.515202  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:27.515235  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:27.569081  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:27.569122  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:27.586776  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:27.586809  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:30.223314  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:30.241046  188266 api_server.go:72] duration metric: took 4m14.179869513s to wait for apiserver process to appear ...
	I0731 21:03:30.241073  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:03:30.241118  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:30.241188  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:30.281267  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:30.281303  188266 cri.go:89] found id: ""
	I0731 21:03:30.281314  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:30.281397  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.285857  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:30.285927  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:30.321742  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:30.321770  188266 cri.go:89] found id: ""
	I0731 21:03:30.321779  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:30.321841  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.326210  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:30.326284  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:30.367998  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:30.368025  188266 cri.go:89] found id: ""
	I0731 21:03:30.368036  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:30.368101  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.372340  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:30.372412  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:30.413689  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:30.413714  188266 cri.go:89] found id: ""
	I0731 21:03:30.413725  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:30.413789  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.418525  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:30.418604  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:30.458505  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.458530  188266 cri.go:89] found id: ""
	I0731 21:03:30.458539  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:30.458587  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.462993  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:30.463058  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:30.500683  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.500711  188266 cri.go:89] found id: ""
	I0731 21:03:30.500722  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:30.500785  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.506197  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:30.506277  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:30.545243  188266 cri.go:89] found id: ""
	I0731 21:03:30.545273  188266 logs.go:276] 0 containers: []
	W0731 21:03:30.545284  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:30.545290  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:30.545371  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:30.588405  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:30.588459  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.588465  188266 cri.go:89] found id: ""
	I0731 21:03:30.588474  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:30.588539  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.593611  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.599345  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:30.599386  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.641530  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:30.641564  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.703655  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:30.703692  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.744119  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:30.744147  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.515238  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:32.014503  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:29.351214  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:29.365487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:29.365561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:29.402989  188656 cri.go:89] found id: ""
	I0731 21:03:29.403015  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.403022  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:29.403028  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:29.403079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:29.443276  188656 cri.go:89] found id: ""
	I0731 21:03:29.443310  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.443321  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:29.443329  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:29.443397  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:29.483285  188656 cri.go:89] found id: ""
	I0731 21:03:29.483311  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.483319  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:29.483326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:29.483384  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:29.522285  188656 cri.go:89] found id: ""
	I0731 21:03:29.522317  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.522329  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:29.522337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:29.522406  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:29.565115  188656 cri.go:89] found id: ""
	I0731 21:03:29.565145  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.565155  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:29.565163  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:29.565233  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:29.603768  188656 cri.go:89] found id: ""
	I0731 21:03:29.603805  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.603816  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:29.603822  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:29.603875  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:29.640380  188656 cri.go:89] found id: ""
	I0731 21:03:29.640406  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.640416  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:29.640424  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:29.640493  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:29.679699  188656 cri.go:89] found id: ""
	I0731 21:03:29.679727  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.679736  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:29.679749  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:29.679764  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:29.735555  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:29.735603  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:29.749670  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:29.749708  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:29.825950  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:29.825973  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:29.825989  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.915420  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:29.915463  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:32.462996  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:32.478659  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:32.478739  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:32.528625  188656 cri.go:89] found id: ""
	I0731 21:03:32.528651  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.528659  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:32.528665  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:32.528724  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:32.574371  188656 cri.go:89] found id: ""
	I0731 21:03:32.574399  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.574408  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:32.574414  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:32.574474  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:32.616916  188656 cri.go:89] found id: ""
	I0731 21:03:32.616960  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.616970  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:32.616975  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:32.617040  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:32.657725  188656 cri.go:89] found id: ""
	I0731 21:03:32.657758  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.657769  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:32.657777  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:32.657842  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:32.693197  188656 cri.go:89] found id: ""
	I0731 21:03:32.693226  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.693237  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:32.693245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:32.693316  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:32.733567  188656 cri.go:89] found id: ""
	I0731 21:03:32.733594  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.733602  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:32.733608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:32.733670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:32.774624  188656 cri.go:89] found id: ""
	I0731 21:03:32.774659  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.774671  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:32.774679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:32.774747  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:32.811755  188656 cri.go:89] found id: ""
	I0731 21:03:32.811790  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.811809  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:32.811822  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:32.811835  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:32.825512  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:32.825544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:32.902310  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:32.902339  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:32.902366  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:32.983347  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:32.983391  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:33.028037  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:33.028068  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
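For reference, the container discovery and log collection above follow one repeated pattern: list containers by component name with crictl, tail each container's logs, then pull the runtime, kubelet, and kernel logs. A minimal shell sketch of that sequence, assuming it is run on the node itself (e.g. via minikube ssh) with crictl pointed at the CRI-O socket:

    # Sketch of the log-gathering loop above; run inside the node.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
        continue
      fi
      for id in $ids; do
        sudo crictl logs --tail 400 "$id"              # per-container logs
      done
    done
    sudo journalctl -u crio -n 400                      # CRI-O runtime logs
    sudo journalctl -u kubelet -n 400                   # kubelet logs
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors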
	I0731 21:03:31.165988  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:31.166042  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:31.209564  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:31.209605  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:31.254061  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:31.254105  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:31.269227  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:31.269266  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:31.394442  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:31.394477  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:31.439011  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:31.439047  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:31.476798  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:31.476825  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:31.524460  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:31.524491  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:31.564254  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:31.564288  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:34.122836  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 21:03:34.128516  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 21:03:34.129484  188266 api_server.go:141] control plane version: v1.30.3
	I0731 21:03:34.129513  188266 api_server.go:131] duration metric: took 3.888432526s to wait for apiserver health ...
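The health probe above can be reproduced by hand from any machine that can reach the node; a minimal sketch using the endpoint from the log (-k skips TLS verification, so no CA bundle is assumed):

    # Probe the apiserver health endpoint checked above.
    curl -k https://192.168.50.221:8444/healthz
    # expected response on success:
    # ok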
	I0731 21:03:34.129523  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:03:34.129554  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:34.129622  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:34.167751  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:34.167781  188266 cri.go:89] found id: ""
	I0731 21:03:34.167792  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:34.167860  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.172786  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:34.172858  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:34.212172  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.212204  188266 cri.go:89] found id: ""
	I0731 21:03:34.212215  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:34.212289  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.216651  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:34.216736  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:34.263492  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:34.263515  188266 cri.go:89] found id: ""
	I0731 21:03:34.263528  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:34.263592  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.268548  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:34.268630  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:34.309420  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:34.309453  188266 cri.go:89] found id: ""
	I0731 21:03:34.309463  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:34.309529  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.313921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:34.313993  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:34.354712  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.354740  188266 cri.go:89] found id: ""
	I0731 21:03:34.354754  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:34.354818  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.359363  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:34.359446  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:34.397596  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.397622  188266 cri.go:89] found id: ""
	I0731 21:03:34.397634  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:34.397710  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.402126  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:34.402207  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:34.447198  188266 cri.go:89] found id: ""
	I0731 21:03:34.447234  188266 logs.go:276] 0 containers: []
	W0731 21:03:34.447242  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:34.447248  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:34.447304  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:34.487429  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:34.487452  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.487457  188266 cri.go:89] found id: ""
	I0731 21:03:34.487464  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:34.487519  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.494362  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.499409  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:34.499438  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.549761  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:34.549802  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.588571  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:34.588603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.646590  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:34.646635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.691320  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:34.691353  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:35.098975  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:35.099018  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:35.153924  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:35.153964  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:35.168091  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:35.168121  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:35.214469  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:35.214511  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:35.260694  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:35.260724  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:35.299230  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:35.299261  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:35.413598  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:35.413635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:35.451331  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:35.451359  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:35.582896  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:35.597483  188656 kubeadm.go:597] duration metric: took 4m3.860422558s to restartPrimaryControlPlane
	W0731 21:03:35.597559  188656 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:35.597598  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:36.054326  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:36.070199  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:36.081882  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:36.093300  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:36.093322  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:36.093396  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:36.103781  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:36.103843  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:36.114702  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:36.125213  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:36.125299  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:36.136299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.146441  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:36.146520  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.157524  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:36.168247  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:36.168327  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
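The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes the file when the grep fails (including when the file is simply missing). A condensed sketch of that loop:

    # Condensed sketch of the stale kubeconfig cleanup performed above.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"    # drop configs that do not point at the expected endpoint
      fi
    done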
	I0731 21:03:36.178875  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:36.253662  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:03:36.253804  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:36.401385  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:36.401550  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:36.401686  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:03:36.591601  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:34.513632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.515043  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.593492  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:36.593604  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:36.593690  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:36.593817  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:36.593907  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:36.594011  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:36.594090  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:36.594215  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:36.594602  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:36.595122  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:36.595323  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:36.595414  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:36.595548  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:37.052958  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:37.178980  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:37.375085  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:37.550735  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:37.571991  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:37.575050  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:37.575227  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:37.707194  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:37.997696  188266 system_pods.go:59] 8 kube-system pods found
	I0731 21:03:37.997725  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:37.997730  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:37.997734  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:37.997738  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:37.997741  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:37.997744  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:37.997750  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:37.997754  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:37.997762  188266 system_pods.go:74] duration metric: took 3.868231958s to wait for pod list to return data ...
	I0731 21:03:37.997773  188266 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:03:38.000640  188266 default_sa.go:45] found service account: "default"
	I0731 21:03:38.000665  188266 default_sa.go:55] duration metric: took 2.88647ms for default service account to be created ...
	I0731 21:03:38.000672  188266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:03:38.007107  188266 system_pods.go:86] 8 kube-system pods found
	I0731 21:03:38.007132  188266 system_pods.go:89] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:38.007137  188266 system_pods.go:89] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:38.007142  188266 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:38.007146  188266 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:38.007152  188266 system_pods.go:89] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:38.007158  188266 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:38.007164  188266 system_pods.go:89] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:38.007168  188266 system_pods.go:89] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:38.007175  188266 system_pods.go:126] duration metric: took 6.498733ms to wait for k8s-apps to be running ...
	I0731 21:03:38.007183  188266 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:03:38.007240  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:38.026906  188266 system_svc.go:56] duration metric: took 19.708653ms WaitForService to wait for kubelet
	I0731 21:03:38.026938  188266 kubeadm.go:582] duration metric: took 4m21.965767608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:03:38.026969  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:03:38.030479  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:03:38.030554  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 21:03:38.030577  188266 node_conditions.go:105] duration metric: took 3.601933ms to run NodePressure ...
	I0731 21:03:38.030600  188266 start.go:241] waiting for startup goroutines ...
	I0731 21:03:38.030611  188266 start.go:246] waiting for cluster config update ...
	I0731 21:03:38.030626  188266 start.go:255] writing updated cluster config ...
	I0731 21:03:38.031028  188266 ssh_runner.go:195] Run: rm -f paused
	I0731 21:03:38.082629  188266 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:03:38.084590  188266 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-125614" cluster and "default" namespace by default
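Once minikube reports that kubectl is configured for the new context, the cluster can be spot-checked from the host; a small usage example (the context name is taken from the log line above):

    # Quick verification against the freshly configured context.
    kubectl --context default-k8s-diff-port-125614 get nodes
    kubectl --context default-k8s-diff-port-125614 -n kube-system get pods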
	I0731 21:03:37.709295  188656 out.go:204]   - Booting up control plane ...
	I0731 21:03:37.709427  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:37.722549  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:37.723455  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:37.724194  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:37.726323  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
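While kubeadm waits for the control plane to come up as static Pods, progress can be inspected on the node itself; a minimal sketch (run inside the node, e.g. via minikube ssh):

    # Inspect the static Pod manifests kubeadm just wrote and the containers started from them.
    sudo ls /etc/kubernetes/manifests
    sudo crictl ps -a --name kube-apiserver
    sudo journalctl -u kubelet -n 50    # kubelet activity while the control plane boots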
	I0731 21:03:39.013773  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:41.016158  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
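The repeated pod_ready messages above show the metrics-server pod stuck with unready containers. One way to inspect why, assuming kubectl is pointed at the affected profile and the addon carries its usual k8s-app=metrics-server label (an assumption here, not taken from the log):

    # Inspect the pending metrics-server pod (label assumed).
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pod -l k8s-app=metrics-server   # events show why containers stay unready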
	I0731 21:03:44.360883  188133 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.27764632s)
	I0731 21:03:44.360955  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:44.379069  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:44.389518  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:44.400223  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:44.400250  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:44.400302  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:44.410644  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:44.410718  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:44.421136  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:44.431161  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:44.431231  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:44.441936  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.451761  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:44.451820  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.462692  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:44.472982  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:44.473050  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:03:44.482980  188133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:44.532539  188133 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0731 21:03:44.532637  188133 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:44.651505  188133 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:44.651654  188133 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:44.651772  188133 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:44.660564  188133 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:44.662559  188133 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:44.662676  188133 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:44.662765  188133 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:44.662878  188133 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:44.662971  188133 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:44.663073  188133 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:44.663142  188133 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:44.663218  188133 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:44.663293  188133 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:44.663389  188133 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:44.663527  188133 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:44.663587  188133 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:44.663679  188133 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:44.813556  188133 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:44.908380  188133 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:03:45.005215  188133 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:45.138446  188133 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:45.222892  188133 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:45.223622  188133 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:45.226748  188133 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:43.513039  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.513901  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.228799  188133 out.go:204]   - Booting up control plane ...
	I0731 21:03:45.228934  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:45.229087  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:45.230021  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:45.249145  188133 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:45.258184  188133 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:45.258267  188133 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:45.392726  188133 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:03:45.392852  188133 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:03:45.899754  188133 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.694095ms
	I0731 21:03:45.899857  188133 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:03:51.901713  188133 kubeadm.go:310] [api-check] The API server is healthy after 6.00194457s
	I0731 21:03:51.914947  188133 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:03:51.932510  188133 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:03:51.971055  188133 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:03:51.971273  188133 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-916885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:03:51.985104  188133 kubeadm.go:310] [bootstrap-token] Using token: q86dx8.9ipyjyidvcwogxce
	I0731 21:03:47.515248  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:50.016206  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:51.986447  188133 out.go:204]   - Configuring RBAC rules ...
	I0731 21:03:51.986576  188133 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:03:51.993910  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:03:52.002474  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:03:52.007035  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:03:52.011708  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:03:52.020500  188133 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:03:52.310057  188133 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:03:52.778266  188133 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:03:53.308425  188133 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:03:53.309509  188133 kubeadm.go:310] 
	I0731 21:03:53.309585  188133 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:03:53.309597  188133 kubeadm.go:310] 
	I0731 21:03:53.309686  188133 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:03:53.309694  188133 kubeadm.go:310] 
	I0731 21:03:53.309715  188133 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:03:53.309771  188133 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:03:53.309875  188133 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:03:53.309894  188133 kubeadm.go:310] 
	I0731 21:03:53.310007  188133 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:03:53.310027  188133 kubeadm.go:310] 
	I0731 21:03:53.310088  188133 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:03:53.310099  188133 kubeadm.go:310] 
	I0731 21:03:53.310164  188133 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:03:53.310275  188133 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:03:53.310371  188133 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:03:53.310396  188133 kubeadm.go:310] 
	I0731 21:03:53.310499  188133 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:03:53.310601  188133 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:03:53.310611  188133 kubeadm.go:310] 
	I0731 21:03:53.310735  188133 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.310910  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 21:03:53.310961  188133 kubeadm.go:310] 	--control-plane 
	I0731 21:03:53.310970  188133 kubeadm.go:310] 
	I0731 21:03:53.311078  188133 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:03:53.311092  188133 kubeadm.go:310] 
	I0731 21:03:53.311222  188133 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.311402  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 21:03:53.312409  188133 kubeadm.go:310] W0731 21:03:44.497219    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312703  188133 kubeadm.go:310] W0731 21:03:44.498106    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312811  188133 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
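The two deprecation warnings above suggest migrating the kubeadm configuration off the v1beta3 API; the command quoted in the warning is:

    # Rewrite a v1beta3 kubeadm config with the newer API version, as the warning suggests.
    kubeadm config migrate --old-config old.yaml --new-config new.yaml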
	I0731 21:03:53.312857  188133 cni.go:84] Creating CNI manager for ""
	I0731 21:03:53.312870  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:03:53.315035  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:03:53.316406  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:03:53.327870  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
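The bridge CNI configuration lands on the node as /etc/cni/net.d/1-k8s.conflist (496 bytes per the line above). Rather than reproducing its contents here, it can be read back from the host; the exact invocation below is an illustration, not something the test runs:

    # View the bridge CNI config minikube just copied onto the node.
    minikube -p no-preload-916885 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"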
	I0731 21:03:53.352757  188133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:03:53.352902  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:53.352919  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-916885 minikube.k8s.io/updated_at=2024_07_31T21_03_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=no-preload-916885 minikube.k8s.io/primary=true
	I0731 21:03:53.403275  188133 ops.go:34] apiserver oom_adj: -16
	I0731 21:03:53.579520  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.080457  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.579898  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.080464  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.580211  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.080518  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.579806  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.080302  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.181987  188133 kubeadm.go:1113] duration metric: took 3.829153755s to wait for elevateKubeSystemPrivileges
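The repeated "get sa default" calls above are a readiness poll: the elevate-privileges step keeps retrying (at roughly half-second intervals, judging by the timestamps) until the default service account exists. A condensed sketch of that polling pattern:

    # Poll until the default service account exists, as the retry loop above does.
    until sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done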
	I0731 21:03:57.182024  188133 kubeadm.go:394] duration metric: took 4m59.623631766s to StartCluster
	I0731 21:03:57.182051  188133 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.182160  188133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:03:57.185297  188133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.185586  188133 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:03:57.185672  188133 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:03:57.185753  188133 addons.go:69] Setting storage-provisioner=true in profile "no-preload-916885"
	I0731 21:03:57.185776  188133 addons.go:69] Setting default-storageclass=true in profile "no-preload-916885"
	I0731 21:03:57.185797  188133 addons.go:69] Setting metrics-server=true in profile "no-preload-916885"
	I0731 21:03:57.185825  188133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-916885"
	I0731 21:03:57.185844  188133 addons.go:234] Setting addon metrics-server=true in "no-preload-916885"
	W0731 21:03:57.185856  188133 addons.go:243] addon metrics-server should already be in state true
	I0731 21:03:57.185864  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:03:57.185889  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.185785  188133 addons.go:234] Setting addon storage-provisioner=true in "no-preload-916885"
	W0731 21:03:57.185926  188133 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:03:57.185956  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.186201  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186226  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186247  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186279  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186301  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186345  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.187280  188133 out.go:177] * Verifying Kubernetes components...
	I0731 21:03:57.188864  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:03:57.202393  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0731 21:03:57.202431  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0731 21:03:57.202856  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.202946  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.203416  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203434  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203688  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203707  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203829  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204081  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204270  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.204428  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.204462  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.204960  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0731 21:03:57.205722  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.206275  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.206291  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.208245  188133 addons.go:234] Setting addon default-storageclass=true in "no-preload-916885"
	W0731 21:03:57.208264  188133 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:03:57.208296  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.208640  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.208663  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.208866  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.209432  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.209458  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.222235  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0731 21:03:57.222835  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.223408  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.223429  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.224137  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.224366  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.226564  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.227398  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0731 21:03:57.227842  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.228377  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.228399  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.228427  188133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:03:57.228836  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.229521  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.229573  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.230036  188133 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.230056  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:03:57.230075  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.230207  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0731 21:03:57.230601  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.230993  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.231008  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.231323  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.231519  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.233542  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.235239  188133 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:03:52.514632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:55.014017  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:57.235631  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236081  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.236105  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.236478  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:03:57.236493  188133 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:03:57.236510  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.236545  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.236711  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.236824  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.238988  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239335  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.239361  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.239645  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.239775  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.239902  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.252386  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0731 21:03:57.252846  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.253454  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.253474  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.253837  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.254048  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.255784  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.256020  188133 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.256037  188133 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:03:57.256057  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.258870  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259220  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.259254  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259446  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.259612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.259783  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.259940  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.405243  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:03:57.426852  188133 node_ready.go:35] waiting up to 6m0s for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494325  188133 node_ready.go:49] node "no-preload-916885" has status "Ready":"True"
	I0731 21:03:57.494352  188133 node_ready.go:38] duration metric: took 67.471516ms for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494365  188133 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:57.497819  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:03:57.497849  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:03:57.528118  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:03:57.528148  188133 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:03:57.557889  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.568872  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.583099  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:03:57.587315  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:57.587342  188133 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:03:57.645504  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:58.515635  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515650  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515667  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.515675  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516054  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516100  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516117  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516161  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516187  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516141  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516213  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516097  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516431  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516444  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.517889  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.517914  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.517930  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.569097  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.569120  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.569463  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.569511  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.569520  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726076  188133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.080526254s)
	I0731 21:03:58.726140  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726153  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.726469  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.726490  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726501  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726514  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.728603  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.728666  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.728688  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.728715  188133 addons.go:475] Verifying addon metrics-server=true in "no-preload-916885"
	I0731 21:03:58.730520  188133 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:03:58.731823  188133 addons.go:510] duration metric: took 1.546157188s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:03:57.515366  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.515730  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:02.013803  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.593082  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:00.589165  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:00.589192  188133 pod_ready.go:81] duration metric: took 3.00606369s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:00.589204  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:02.597316  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.096168  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.597832  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.597857  188133 pod_ready.go:81] duration metric: took 5.008646335s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.597866  188133 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603105  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.603128  188133 pod_ready.go:81] duration metric: took 5.254251ms for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603140  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610748  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.610771  188133 pod_ready.go:81] duration metric: took 7.623438ms for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610782  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615949  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.615966  188133 pod_ready.go:81] duration metric: took 5.176213ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615975  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620431  188133 pod_ready.go:92] pod "kube-proxy-b4h2z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.620450  188133 pod_ready.go:81] duration metric: took 4.469258ms for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620458  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993080  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.993104  188133 pod_ready.go:81] duration metric: took 372.640001ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993112  188133 pod_ready.go:38] duration metric: took 8.498733061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:05.993125  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:05.993186  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:06.009952  188133 api_server.go:72] duration metric: took 8.824325154s to wait for apiserver process to appear ...
	I0731 21:04:06.009981  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:06.010001  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 21:04:06.014715  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 21:04:06.015917  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:04:06.015944  188133 api_server.go:131] duration metric: took 5.952931ms to wait for apiserver health ...
	I0731 21:04:06.015954  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:06.196874  188133 system_pods.go:59] 9 kube-system pods found
	I0731 21:04:06.196907  188133 system_pods.go:61] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.196914  188133 system_pods.go:61] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.196918  188133 system_pods.go:61] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.196923  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.196929  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.196933  188133 system_pods.go:61] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.196938  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.196945  188133 system_pods.go:61] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.196950  188133 system_pods.go:61] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.196960  188133 system_pods.go:74] duration metric: took 180.999269ms to wait for pod list to return data ...
	I0731 21:04:06.196970  188133 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:06.394499  188133 default_sa.go:45] found service account: "default"
	I0731 21:04:06.394530  188133 default_sa.go:55] duration metric: took 197.552628ms for default service account to be created ...
	I0731 21:04:06.394539  188133 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:06.598314  188133 system_pods.go:86] 9 kube-system pods found
	I0731 21:04:06.598345  188133 system_pods.go:89] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.598354  188133 system_pods.go:89] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.598361  188133 system_pods.go:89] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.598370  188133 system_pods.go:89] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.598376  188133 system_pods.go:89] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.598389  188133 system_pods.go:89] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.598397  188133 system_pods.go:89] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.598408  188133 system_pods.go:89] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.598419  188133 system_pods.go:89] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.598430  188133 system_pods.go:126] duration metric: took 203.884264ms to wait for k8s-apps to be running ...
	I0731 21:04:06.598442  188133 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:06.598498  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:06.613642  188133 system_svc.go:56] duration metric: took 15.190132ms WaitForService to wait for kubelet
	I0731 21:04:06.613675  188133 kubeadm.go:582] duration metric: took 9.4280531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:06.613705  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:06.794163  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:06.794191  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:06.794204  188133 node_conditions.go:105] duration metric: took 180.492992ms to run NodePressure ...
	I0731 21:04:06.794218  188133 start.go:241] waiting for startup goroutines ...
	I0731 21:04:06.794227  188133 start.go:246] waiting for cluster config update ...
	I0731 21:04:06.794239  188133 start.go:255] writing updated cluster config ...
	I0731 21:04:06.794547  188133 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:06.844118  188133 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:04:06.846234  188133 out.go:177] * Done! kubectl is now configured to use "no-preload-916885" cluster and "default" namespace by default
	I0731 21:04:04.015079  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:06.514907  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:08.514958  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:11.014341  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:13.514956  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:14.014985  187862 pod_ready.go:81] duration metric: took 4m0.007784922s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:04:14.015013  187862 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:04:14.015020  187862 pod_ready.go:38] duration metric: took 4m6.056814749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:14.015034  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:14.015079  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:14.015127  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:14.086254  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:14.086283  187862 cri.go:89] found id: ""
	I0731 21:04:14.086293  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:14.086368  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.091267  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:14.091334  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:14.138577  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.138613  187862 cri.go:89] found id: ""
	I0731 21:04:14.138624  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:14.138696  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.143245  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:14.143315  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:14.182295  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.182325  187862 cri.go:89] found id: ""
	I0731 21:04:14.182336  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:14.182400  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.186861  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:14.186936  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:14.230524  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:14.230547  187862 cri.go:89] found id: ""
	I0731 21:04:14.230555  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:14.230609  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.235285  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:14.235354  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:14.279188  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.279209  187862 cri.go:89] found id: ""
	I0731 21:04:14.279217  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:14.279268  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.284280  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:14.284362  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:14.333736  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:14.333764  187862 cri.go:89] found id: ""
	I0731 21:04:14.333774  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:14.333830  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.338652  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:14.338717  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:14.380632  187862 cri.go:89] found id: ""
	I0731 21:04:14.380663  187862 logs.go:276] 0 containers: []
	W0731 21:04:14.380672  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:14.380678  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:14.380747  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:14.424705  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.424727  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.424732  187862 cri.go:89] found id: ""
	I0731 21:04:14.424741  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:14.424801  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.429310  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.434243  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:14.434267  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:14.490743  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:14.490782  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.536575  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:14.536613  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.585952  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:14.585986  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.626198  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:14.626228  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:14.672674  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:14.672712  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.711759  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:14.711788  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.757020  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:14.757047  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:15.286344  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:15.286393  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:15.301933  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:15.301969  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:15.451532  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:15.451566  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:15.502398  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:15.502443  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:15.544678  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:15.544719  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:17.729291  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:04:17.730290  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:17.730512  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:18.104050  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:18.121028  187862 api_server.go:72] duration metric: took 4m17.382743031s to wait for apiserver process to appear ...
	I0731 21:04:18.121061  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:18.121109  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:18.121179  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:18.165472  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.165498  187862 cri.go:89] found id: ""
	I0731 21:04:18.165507  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:18.165559  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.169592  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:18.169663  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:18.216918  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.216942  187862 cri.go:89] found id: ""
	I0731 21:04:18.216951  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:18.217015  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.221467  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:18.221546  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:18.267066  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.267089  187862 cri.go:89] found id: ""
	I0731 21:04:18.267098  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:18.267164  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.271583  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:18.271662  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:18.316381  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.316404  187862 cri.go:89] found id: ""
	I0731 21:04:18.316412  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:18.316472  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.320859  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:18.320932  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:18.365366  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:18.365396  187862 cri.go:89] found id: ""
	I0731 21:04:18.365410  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:18.365476  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.369933  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:18.370019  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:18.411121  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:18.411143  187862 cri.go:89] found id: ""
	I0731 21:04:18.411152  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:18.411203  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.415493  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:18.415561  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:18.453040  187862 cri.go:89] found id: ""
	I0731 21:04:18.453069  187862 logs.go:276] 0 containers: []
	W0731 21:04:18.453078  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:18.453085  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:18.453153  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:18.499335  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:18.499359  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.499363  187862 cri.go:89] found id: ""
	I0731 21:04:18.499371  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:18.499446  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.504353  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.508619  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:18.508640  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:18.562692  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:18.562732  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.623405  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:18.623446  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.679472  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:18.679510  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.728893  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:18.728933  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.770963  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:18.770994  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:18.819353  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:18.819385  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:18.835654  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:18.835684  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:18.947479  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:18.947516  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.995005  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:18.995043  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:19.033246  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:19.033274  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:19.092703  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:19.092740  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:19.129738  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:19.129769  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:22.058935  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 21:04:22.063496  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 21:04:22.064670  187862 api_server.go:141] control plane version: v1.30.3
	I0731 21:04:22.064690  187862 api_server.go:131] duration metric: took 3.943623055s to wait for apiserver health ...
	I0731 21:04:22.064699  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:22.064721  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:22.064771  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:22.103710  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.103733  187862 cri.go:89] found id: ""
	I0731 21:04:22.103741  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:22.103798  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.108133  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:22.108203  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:22.159120  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.159145  187862 cri.go:89] found id: ""
	I0731 21:04:22.159155  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:22.159213  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.165107  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:22.165169  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:22.202426  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.202454  187862 cri.go:89] found id: ""
	I0731 21:04:22.202464  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:22.202524  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.206785  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:22.206842  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:22.245008  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.245039  187862 cri.go:89] found id: ""
	I0731 21:04:22.245050  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:22.245111  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.249467  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:22.249548  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:22.731353  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:22.731627  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:22.298105  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.298135  187862 cri.go:89] found id: ""
	I0731 21:04:22.298145  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:22.298209  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.302845  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:22.302902  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:22.346868  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.346898  187862 cri.go:89] found id: ""
	I0731 21:04:22.346909  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:22.346978  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.351246  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:22.351313  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:22.389698  187862 cri.go:89] found id: ""
	I0731 21:04:22.389730  187862 logs.go:276] 0 containers: []
	W0731 21:04:22.389742  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:22.389751  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:22.389817  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:22.425212  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.425234  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.425238  187862 cri.go:89] found id: ""
	I0731 21:04:22.425245  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:22.425298  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.429584  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.433471  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:22.433496  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.490354  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:22.490390  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.530117  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:22.530146  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:22.545249  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:22.545281  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:22.658074  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:22.658115  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.711537  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:22.711573  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.758644  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:22.758685  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.796716  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:22.796751  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.843502  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:22.843538  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.881738  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:22.881765  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:22.936317  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:22.936360  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.977562  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:22.977592  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:23.354873  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:23.354921  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:25.917553  187862 system_pods.go:59] 8 kube-system pods found
	I0731 21:04:25.917588  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.917593  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.917597  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.917601  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.917604  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.917608  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.917614  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.917624  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.917635  187862 system_pods.go:74] duration metric: took 3.852929636s to wait for pod list to return data ...
	I0731 21:04:25.917649  187862 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:25.920234  187862 default_sa.go:45] found service account: "default"
	I0731 21:04:25.920256  187862 default_sa.go:55] duration metric: took 2.600194ms for default service account to be created ...
	I0731 21:04:25.920264  187862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:25.926296  187862 system_pods.go:86] 8 kube-system pods found
	I0731 21:04:25.926325  187862 system_pods.go:89] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.926330  187862 system_pods.go:89] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.926334  187862 system_pods.go:89] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.926338  187862 system_pods.go:89] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.926342  187862 system_pods.go:89] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.926346  187862 system_pods.go:89] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.926352  187862 system_pods.go:89] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.926356  187862 system_pods.go:89] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.926365  187862 system_pods.go:126] duration metric: took 6.094538ms to wait for k8s-apps to be running ...
	I0731 21:04:25.926373  187862 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:25.926433  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:25.945225  187862 system_svc.go:56] duration metric: took 18.837835ms WaitForService to wait for kubelet
	I0731 21:04:25.945264  187862 kubeadm.go:582] duration metric: took 4m25.206984451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:25.945294  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:25.948480  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:25.948506  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:25.948520  187862 node_conditions.go:105] duration metric: took 3.219175ms to run NodePressure ...
	I0731 21:04:25.948535  187862 start.go:241] waiting for startup goroutines ...
	I0731 21:04:25.948543  187862 start.go:246] waiting for cluster config update ...
	I0731 21:04:25.948556  187862 start.go:255] writing updated cluster config ...
	I0731 21:04:25.949317  187862 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:26.000525  187862 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:04:26.002719  187862 out.go:177] * Done! kubectl is now configured to use "embed-certs-831240" cluster and "default" namespace by default
	I0731 21:04:32.732572  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:32.732835  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:52.734257  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:52.734530  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739465  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:05:32.739778  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739796  188656 kubeadm.go:310] 
	I0731 21:05:32.739854  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:05:32.739962  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:05:32.739988  188656 kubeadm.go:310] 
	I0731 21:05:32.740034  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:05:32.740083  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:05:32.740230  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:05:32.740245  188656 kubeadm.go:310] 
	I0731 21:05:32.740393  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:05:32.740441  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:05:32.740485  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:05:32.740494  188656 kubeadm.go:310] 
	I0731 21:05:32.740624  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:05:32.740741  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:05:32.740752  188656 kubeadm.go:310] 
	I0731 21:05:32.740888  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:05:32.741008  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:05:32.741084  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:05:32.741145  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:05:32.741152  188656 kubeadm.go:310] 
	I0731 21:05:32.741834  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:05:32.741967  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:05:32.742066  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:05:32.742264  188656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:05:32.742340  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:05:33.227380  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:05:33.243864  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:05:33.254208  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:05:33.254234  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:05:33.254313  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:05:33.264766  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:05:33.264846  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:05:33.275517  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:05:33.286281  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:05:33.286358  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:05:33.297108  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.307555  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:05:33.307627  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.318193  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:05:33.328155  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:05:33.328220  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:05:33.338088  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:05:33.569897  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:07:29.725230  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:07:29.725381  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:07:29.726868  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:07:29.726959  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:07:29.727064  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:07:29.727204  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:07:29.727322  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:07:29.727389  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:07:29.729525  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:07:29.729659  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:07:29.729761  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:07:29.729918  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:07:29.730026  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:07:29.730126  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:07:29.730268  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:07:29.730369  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:07:29.730461  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:07:29.730555  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:07:29.730658  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:07:29.730713  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:07:29.730790  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:07:29.730856  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:07:29.730931  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:07:29.731014  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:07:29.731111  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:07:29.731248  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:07:29.731339  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:07:29.731395  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:07:29.731486  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:07:29.733052  188656 out.go:204]   - Booting up control plane ...
	I0731 21:07:29.733146  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:07:29.733226  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:07:29.733305  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:07:29.733454  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:07:29.733656  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:07:29.733735  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:07:29.733830  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734048  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734116  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734275  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734331  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734543  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734642  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734868  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734966  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.735234  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.735252  188656 kubeadm.go:310] 
	I0731 21:07:29.735313  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:07:29.735376  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:07:29.735385  188656 kubeadm.go:310] 
	I0731 21:07:29.735432  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:07:29.735480  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:07:29.735624  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:07:29.735634  188656 kubeadm.go:310] 
	I0731 21:07:29.735779  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:07:29.735830  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:07:29.735879  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:07:29.735889  188656 kubeadm.go:310] 
	I0731 21:07:29.736038  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:07:29.736129  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:07:29.736141  188656 kubeadm.go:310] 
	I0731 21:07:29.736241  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:07:29.736315  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:07:29.736400  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:07:29.736480  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:07:29.736537  188656 kubeadm.go:310] 
	I0731 21:07:29.736579  188656 kubeadm.go:394] duration metric: took 7m58.053099483s to StartCluster
	I0731 21:07:29.736660  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:07:29.736793  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:07:29.802897  188656 cri.go:89] found id: ""
	I0731 21:07:29.802932  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.802945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:07:29.802953  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:07:29.803021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:07:29.840059  188656 cri.go:89] found id: ""
	I0731 21:07:29.840088  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.840098  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:07:29.840106  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:07:29.840178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:07:29.881030  188656 cri.go:89] found id: ""
	I0731 21:07:29.881058  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.881066  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:07:29.881073  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:07:29.881150  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:07:29.923495  188656 cri.go:89] found id: ""
	I0731 21:07:29.923524  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.923532  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:07:29.923538  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:07:29.923604  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:07:29.966128  188656 cri.go:89] found id: ""
	I0731 21:07:29.966156  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.966164  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:07:29.966171  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:07:29.966236  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:07:30.007648  188656 cri.go:89] found id: ""
	I0731 21:07:30.007678  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.007687  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:07:30.007693  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:07:30.007748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:07:30.047857  188656 cri.go:89] found id: ""
	I0731 21:07:30.047887  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.047903  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:07:30.047909  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:07:30.047959  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:07:30.087245  188656 cri.go:89] found id: ""
	I0731 21:07:30.087275  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.087283  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:07:30.087294  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:07:30.087308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:07:30.168205  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:07:30.168235  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:07:30.168256  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:07:30.276908  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:07:30.276951  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:07:30.322993  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:07:30.323030  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:07:30.375237  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:07:30.375287  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 21:07:30.392523  188656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:07:30.392579  188656 out.go:239] * 
	W0731 21:07:30.392653  188656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.392683  188656 out.go:239] * 
	W0731 21:07:30.393845  188656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:07:30.397498  188656 out.go:177] 
	W0731 21:07:30.398890  188656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.398959  188656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:07:30.398995  188656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:07:30.401295  188656 out.go:177] 
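	The captured start attempt above ends with minikube's own hint for K8S_KUBELET_NOT_RUNNING (issue #4172): inspect 'journalctl -xeu kubelet' and retry the start with the kubelet cgroup driver pinned to systemd. A minimal sketch of that retry follows; the kvm2 driver, cri-o runtime and v1.20.0 version are read off this log, while the profile name is a placeholder and the exact flag set is an assumption, not part of the captured output:
	
		# Sketch only: re-run the failing start with the suggested kubelet cgroup driver.
		minikube start -p <profile> \
		  --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd
	
		# If the control plane still never comes up, check the kubelet from inside the node.
		minikube -p <profile> ssh "sudo systemctl status kubelet --no-pager && sudo journalctl -xeu kubelet | tail -n 100"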
	
	
	==> CRI-O <==
	Jul 31 21:13:08 no-preload-916885 crio[724]: time="2024-07-31 21:13:08.976178277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460388976153977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd83f6dd-24b0-4833-b388-cd0e09702602 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:08 no-preload-916885 crio[724]: time="2024-07-31 21:13:08.976818061Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4eb7fd6-5f1b-412e-8900-b87f2c7e4311 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:08 no-preload-916885 crio[724]: time="2024-07-31 21:13:08.976872472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4eb7fd6-5f1b-412e-8900-b87f2c7e4311 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:08 no-preload-916885 crio[724]: time="2024-07-31 21:13:08.977091585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0,PodSandboxId:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839624805026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51,PodSandboxId:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839583366097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec,PodSandboxId:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722459839078526963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11,PodSandboxId:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722459838069264460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042,PodSandboxId:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722459826550983146,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959,PodSandboxId:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722459826548105160,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9,PodSandboxId:3275ffc9c3ba372d2a5bd99eb803e165221016c4e39eb025a82c9c76b937251e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722459826487201402,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360,PodSandboxId:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722459826489063733,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0,PodSandboxId:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722459539719672412,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4eb7fd6-5f1b-412e-8900-b87f2c7e4311 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.019577706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b032322e-172b-41ec-8641-c2f70bd839c3 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.019647136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b032322e-172b-41ec-8641-c2f70bd839c3 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.020957275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d3eac72-c927-4cfe-a32f-37368258c089 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.021276354Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460389021255278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d3eac72-c927-4cfe-a32f-37368258c089 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.022020051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bcf59bb-6dae-4da9-8592-b913791e3738 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.022070971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bcf59bb-6dae-4da9-8592-b913791e3738 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.022264187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0,PodSandboxId:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839624805026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51,PodSandboxId:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839583366097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec,PodSandboxId:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722459839078526963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11,PodSandboxId:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722459838069264460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042,PodSandboxId:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722459826550983146,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959,PodSandboxId:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722459826548105160,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9,PodSandboxId:3275ffc9c3ba372d2a5bd99eb803e165221016c4e39eb025a82c9c76b937251e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722459826487201402,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360,PodSandboxId:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722459826489063733,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0,PodSandboxId:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722459539719672412,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bcf59bb-6dae-4da9-8592-b913791e3738 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.068791391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7009375-9994-454b-b90b-5e3505972352 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.068863616Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7009375-9994-454b-b90b-5e3505972352 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.070312459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=173d9806-e3d5-4bdc-9e79-1c010dc172d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.070719280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460389070697996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=173d9806-e3d5-4bdc-9e79-1c010dc172d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.071352776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e63aae9-9188-4c47-b1f4-9fe66599b3a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.071410312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e63aae9-9188-4c47-b1f4-9fe66599b3a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.071694890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0,PodSandboxId:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839624805026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51,PodSandboxId:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839583366097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec,PodSandboxId:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722459839078526963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11,PodSandboxId:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722459838069264460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042,PodSandboxId:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722459826550983146,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959,PodSandboxId:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722459826548105160,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9,PodSandboxId:3275ffc9c3ba372d2a5bd99eb803e165221016c4e39eb025a82c9c76b937251e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722459826487201402,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360,PodSandboxId:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722459826489063733,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0,PodSandboxId:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722459539719672412,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e63aae9-9188-4c47-b1f4-9fe66599b3a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.106311582Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71a6cb1d-7552-4fff-a8fd-a8fcb13bf896 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.106382225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71a6cb1d-7552-4fff-a8fd-a8fcb13bf896 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.107537101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b425a265-aaf4-45ec-a75a-4f04df9dfc89 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.107877179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460389107856791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b425a265-aaf4-45ec-a75a-4f04df9dfc89 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.108541520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86fcdd6f-8aaa-412e-ada3-c2569fd6788b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.108590023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86fcdd6f-8aaa-412e-ada3-c2569fd6788b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:09 no-preload-916885 crio[724]: time="2024-07-31 21:13:09.108817778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0,PodSandboxId:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839624805026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51,PodSandboxId:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839583366097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec,PodSandboxId:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722459839078526963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11,PodSandboxId:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722459838069264460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042,PodSandboxId:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722459826550983146,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959,PodSandboxId:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722459826548105160,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9,PodSandboxId:3275ffc9c3ba372d2a5bd99eb803e165221016c4e39eb025a82c9c76b937251e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722459826487201402,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360,PodSandboxId:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722459826489063733,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0,PodSandboxId:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722459539719672412,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86fcdd6f-8aaa-412e-ada3-c2569fd6788b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f9e2e1cd722       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6f50b87f8d38e       coredns-5cfdc65f69-bqgfg
	2db39be0da847       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   685e4d79b333b       coredns-5cfdc65f69-9qnjq
	c67c2b844ed87       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   c53427f450026       storage-provisioner
	55f2f77153e80       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   616f0efde70f8       kube-proxy-b4h2z
	d21f0dca410fa       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   5b2fc40540948       etcd-no-preload-916885
	9f5e4d14b8636       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   25090de3a63b6       kube-apiserver-no-preload-916885
	7d924058498dc       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   9cb5caa5f60ee       kube-scheduler-no-preload-916885
	01c90a9147c6e       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   3275ffc9c3ba3       kube-controller-manager-no-preload-916885
	a3169bc1dd903       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   fc5a36a16cf03       kube-apiserver-no-preload-916885
	
	
	==> coredns [2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-916885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-916885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=no-preload-916885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_03_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:03:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-916885
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:13:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:09:10 +0000   Wed, 31 Jul 2024 21:03:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:09:10 +0000   Wed, 31 Jul 2024 21:03:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:09:10 +0000   Wed, 31 Jul 2024 21:03:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:09:10 +0000   Wed, 31 Jul 2024 21:03:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.239
	  Hostname:    no-preload-916885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c9e297d34d5406dae42fd7877c69eaf
	  System UUID:                8c9e297d-34d5-406d-ae42-fd7877c69eaf
	  Boot ID:                    80b9904a-fd63-485d-85db-7980941c521e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-9qnjq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-5cfdc65f69-bqgfg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-no-preload-916885                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-916885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-916885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-b4h2z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-916885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-78fcd8795b-86m8h              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m24s (x6 over 9m24s)  kubelet          Node no-preload-916885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x5 over 9m24s)  kubelet          Node no-preload-916885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x5 over 9m24s)  kubelet          Node no-preload-916885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node no-preload-916885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node no-preload-916885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node no-preload-916885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node no-preload-916885 event: Registered Node no-preload-916885 in Controller
	
	
	==> dmesg <==
	[  +0.050800] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039506] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.742462] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.540903] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.370471] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.695858] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.060259] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053881] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.156929] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.172507] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.302205] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[ +14.856787] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.060271] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.776849] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[Jul31 20:59] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.439040] kauditd_printk_skb: 88 callbacks suppressed
	[Jul31 21:03] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.015409] systemd-fstab-generator[2967]: Ignoring "noauto" option for root device
	[  +4.598244] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.454665] systemd-fstab-generator[3290]: Ignoring "noauto" option for root device
	[  +4.933013] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +0.095244] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 21:04] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042] <==
	{"level":"info","ts":"2024-07-31T21:03:47.081954Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T21:03:47.082182Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.72.239:2380"}
	{"level":"info","ts":"2024-07-31T21:03:47.082409Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.239:2380"}
	{"level":"info","ts":"2024-07-31T21:03:47.085498Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"97631f5e3b276dee","initial-advertise-peer-urls":["https://192.168.72.239:2380"],"listen-peer-urls":["https://192.168.72.239:2380"],"advertise-client-urls":["https://192.168.72.239:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.239:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:03:47.085617Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:03:47.728526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T21:03:47.728633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T21:03:47.728669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee received MsgPreVoteResp from 97631f5e3b276dee at term 1"}
	{"level":"info","ts":"2024-07-31T21:03:47.728698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:03:47.728723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee received MsgVoteResp from 97631f5e3b276dee at term 2"}
	{"level":"info","ts":"2024-07-31T21:03:47.728749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee became leader at term 2"}
	{"level":"info","ts":"2024-07-31T21:03:47.728784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97631f5e3b276dee elected leader 97631f5e3b276dee at term 2"}
	{"level":"info","ts":"2024-07-31T21:03:47.733673Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"97631f5e3b276dee","local-member-attributes":"{Name:no-preload-916885 ClientURLs:[https://192.168.72.239:2379]}","request-path":"/0/members/97631f5e3b276dee/attributes","cluster-id":"df08e509b174dc93","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:03:47.733902Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:03:47.734541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:03:47.734729Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:03:47.734764Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:03:47.734821Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:03:47.738985Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:03:47.740422Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"df08e509b174dc93","local-member-id":"97631f5e3b276dee","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:03:47.743758Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:03:47.744053Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:03:47.74465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:03:47.747323Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:03:47.750046Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.239:2379"}
	
	
	==> kernel <==
	 21:13:09 up 14 min,  0 users,  load average: 0.40, 0.52, 0.30
	Linux no-preload-916885 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0731 21:08:50.698320       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:08:50.698400       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0731 21:08:50.699312       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:08:50.700503       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:09:50.699939       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:09:50.700247       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0731 21:09:50.701038       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:09:50.701087       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0731 21:09:50.702038       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:09:50.702159       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:11:50.702298       1 handler_proxy.go:99] no RequestInfo found in the context
	W0731 21:11:50.702298       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:11:50.702717       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0731 21:11:50.702846       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 21:11:50.704150       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:11:50.704200       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0] <==
	W0731 21:03:40.070433       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.082127       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.116844       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.139799       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.141267       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.147006       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.162824       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.195196       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.204866       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.246774       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.270780       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.311820       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.334086       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.336690       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.344346       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.418017       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.450259       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.450521       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.475056       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.506850       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.700685       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.734676       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.784396       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.857342       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:41.100790       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9] <==
	E0731 21:07:57.576714       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:07:57.712330       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:08:27.583203       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:08:27.720576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:08:57.590195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:08:57.730337       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:09:10.067879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-916885"
	E0731 21:09:27.597084       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:09:27.738326       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:09:54.753515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="306.084µs"
	E0731 21:09:57.606232       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:09:57.747099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:10:07.748889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="105.4µs"
	E0731 21:10:27.612254       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:10:27.756924       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:10:57.620996       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:10:57.765521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:11:27.628988       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:11:27.773834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:11:57.635891       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:11:57.783913       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:12:27.642669       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:12:27.794076       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:12:57.649803       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:12:57.803545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 21:03:58.387748       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 21:03:58.426812       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.239"]
	E0731 21:03:58.427047       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 21:03:58.623623       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 21:03:58.623674       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:03:58.623705       1 server_linux.go:170] "Using iptables Proxier"
	I0731 21:03:58.629824       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 21:03:58.630130       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 21:03:58.630160       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:03:58.633402       1 config.go:197] "Starting service config controller"
	I0731 21:03:58.633572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:03:58.633652       1 config.go:104] "Starting endpoint slice config controller"
	I0731 21:03:58.633658       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:03:58.634323       1 config.go:326] "Starting node config controller"
	I0731 21:03:58.634354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:03:58.734243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:03:58.734361       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:03:58.734684       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360] <==
	W0731 21:03:50.657367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 21:03:50.657436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.667601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 21:03:50.667700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.746634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 21:03:50.746705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.845686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 21:03:50.845749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.870199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 21:03:50.870317       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.917156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 21:03:50.917270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.936319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 21:03:50.936432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.971682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 21:03:50.971904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.978943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 21:03:50.979122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:51.019920       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 21:03:51.020002       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0731 21:03:51.057370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 21:03:51.057638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:51.171879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 21:03:51.171993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0731 21:03:53.313567       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:10:52 no-preload-916885 kubelet[3297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:10:52 no-preload-916885 kubelet[3297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:10:52 no-preload-916885 kubelet[3297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:10:52 no-preload-916885 kubelet[3297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:10:59 no-preload-916885 kubelet[3297]: E0731 21:10:59.730913    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:11:11 no-preload-916885 kubelet[3297]: E0731 21:11:11.730931    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:11:23 no-preload-916885 kubelet[3297]: E0731 21:11:23.730491    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:11:35 no-preload-916885 kubelet[3297]: E0731 21:11:35.731135    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:11:46 no-preload-916885 kubelet[3297]: E0731 21:11:46.731888    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:11:52 no-preload-916885 kubelet[3297]: E0731 21:11:52.782023    3297 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:11:52 no-preload-916885 kubelet[3297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:11:52 no-preload-916885 kubelet[3297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:11:52 no-preload-916885 kubelet[3297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:11:52 no-preload-916885 kubelet[3297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:11:59 no-preload-916885 kubelet[3297]: E0731 21:11:59.732561    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:12:10 no-preload-916885 kubelet[3297]: E0731 21:12:10.734196    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:12:24 no-preload-916885 kubelet[3297]: E0731 21:12:24.731371    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:12:37 no-preload-916885 kubelet[3297]: E0731 21:12:37.730671    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:12:48 no-preload-916885 kubelet[3297]: E0731 21:12:48.733193    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:12:52 no-preload-916885 kubelet[3297]: E0731 21:12:52.781888    3297 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:12:52 no-preload-916885 kubelet[3297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:12:52 no-preload-916885 kubelet[3297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:12:52 no-preload-916885 kubelet[3297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:12:52 no-preload-916885 kubelet[3297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:13:02 no-preload-916885 kubelet[3297]: E0731 21:13:02.733222    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	
	
	==> storage-provisioner [c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec] <==
	I0731 21:03:59.248646       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:03:59.271707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:03:59.271838       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:03:59.288164       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:03:59.290657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-916885_99091271-805e-47df-97b1-345f1aaa81f8!
	I0731 21:03:59.294573       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ded94fb0-c2da-4687-a958-6ba7dca940bb", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-916885_99091271-805e-47df-97b1-345f1aaa81f8 became leader
	I0731 21:03:59.391550       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-916885_99091271-805e-47df-97b1-345f1aaa81f8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-916885 -n no-preload-916885
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-916885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-86m8h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-916885 describe pod metrics-server-78fcd8795b-86m8h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-916885 describe pod metrics-server-78fcd8795b-86m8h: exit status 1 (64.239926ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-86m8h" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-916885 describe pod metrics-server-78fcd8795b-86m8h: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.41s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.32s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0731 21:04:26.620213  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 21:05:09.824581  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 21:05:21.067492  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 21:05:37.628026  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 21:05:48.407294  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 21:06:44.112608  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 21:07:11.453724  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 21:07:11.938537  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831240 -n embed-certs-831240
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:13:26.539495033 +0000 UTC m=+6378.789852869
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-831240 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-831240 logs -n 25: (2.133245737s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC |                     |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo find                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo crio                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-341849                                       | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-248084 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-248084                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:51 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831240            | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-916885             | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:55:13
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:55:13.835355  188656 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:55:13.835514  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835525  188656 out.go:304] Setting ErrFile to fd 2...
	I0731 20:55:13.835531  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835717  188656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:55:13.836233  188656 out.go:298] Setting JSON to false
	I0731 20:55:13.837146  188656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9450,"bootTime":1722449864,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:55:13.837206  188656 start.go:139] virtualization: kvm guest
	I0731 20:55:13.839094  188656 out.go:177] * [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:55:13.840630  188656 notify.go:220] Checking for updates...
	I0731 20:55:13.840638  188656 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:55:13.841884  188656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:55:13.843054  188656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:55:13.844295  188656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:55:13.845348  188656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:55:13.846480  188656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:55:13.847974  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:55:13.848349  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.848390  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.863017  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0731 20:55:13.863418  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.863927  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.863980  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.864357  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.864625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.866178  188656 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 20:55:13.867248  188656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:55:13.867523  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.867552  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.881922  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0731 20:55:13.882304  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.882707  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.882729  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.883037  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.883214  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.917067  188656 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:55:13.918247  188656 start.go:297] selected driver: kvm2
	I0731 20:55:13.918260  188656 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.918396  188656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:55:13.919323  188656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.919428  188656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:55:13.934150  188656 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:55:13.934506  188656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:55:13.934569  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:55:13.934583  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:55:13.934630  188656 start.go:340] cluster config:
	{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.934737  188656 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.936401  188656 out.go:177] * Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	I0731 20:55:13.769565  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:13.937700  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:55:13.937735  188656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:55:13.937743  188656 cache.go:56] Caching tarball of preloaded images
	I0731 20:55:13.937806  188656 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:55:13.937816  188656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:55:13.937907  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:55:13.938068  188656 start.go:360] acquireMachinesLock for old-k8s-version-239115: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:55:19.845616  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:22.917614  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:28.997601  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:32.069596  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:38.149607  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:41.221579  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:47.301587  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:50.373695  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:56.453611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:59.525649  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:05.605640  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:08.677654  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:14.757599  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:17.829627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:23.909581  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:26.981613  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:33.061611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:36.133597  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:42.213638  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:45.285703  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:51.365653  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:54.437615  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:00.517627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:03.589595  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:09.669666  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:12.741661  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:18.821643  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:21.893594  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:27.973636  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:31.045651  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:37.125619  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:40.197656  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:46.277679  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:49.349535  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:55.429634  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:58.501611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:04.581620  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:07.653642  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:13.733571  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:16.805674  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:19.809697  188133 start.go:364] duration metric: took 4m15.439364065s to acquireMachinesLock for "no-preload-916885"
	I0731 20:58:19.809748  188133 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:19.809756  188133 fix.go:54] fixHost starting: 
	I0731 20:58:19.810113  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:19.810149  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:19.825131  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0731 20:58:19.825615  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:19.826110  188133 main.go:141] libmachine: Using API Version  1
	I0731 20:58:19.826132  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:19.826439  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:19.826616  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:19.826840  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 20:58:19.828267  188133 fix.go:112] recreateIfNeeded on no-preload-916885: state=Stopped err=<nil>
	I0731 20:58:19.828294  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	W0731 20:58:19.828471  188133 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:19.829957  188133 out.go:177] * Restarting existing kvm2 VM for "no-preload-916885" ...
	I0731 20:58:19.807506  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:19.807579  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.807919  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:58:19.807946  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.808126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:58:19.809580  187862 machine.go:97] duration metric: took 4m37.431426503s to provisionDockerMachine
	I0731 20:58:19.809625  187862 fix.go:56] duration metric: took 4m37.4520345s for fixHost
	I0731 20:58:19.809631  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 4m37.452053341s
	W0731 20:58:19.809664  187862 start.go:714] error starting host: provision: host is not running
	W0731 20:58:19.809893  187862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 20:58:19.809916  187862 start.go:729] Will try again in 5 seconds ...
	I0731 20:58:19.831221  188133 main.go:141] libmachine: (no-preload-916885) Calling .Start
	I0731 20:58:19.831409  188133 main.go:141] libmachine: (no-preload-916885) Ensuring networks are active...
	I0731 20:58:19.832210  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network default is active
	I0731 20:58:19.832536  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network mk-no-preload-916885 is active
	I0731 20:58:19.832885  188133 main.go:141] libmachine: (no-preload-916885) Getting domain xml...
	I0731 20:58:19.833563  188133 main.go:141] libmachine: (no-preload-916885) Creating domain...
	I0731 20:58:21.031310  188133 main.go:141] libmachine: (no-preload-916885) Waiting to get IP...
	I0731 20:58:21.032067  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.032519  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.032626  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.032509  189287 retry.go:31] will retry after 207.547113ms: waiting for machine to come up
	I0731 20:58:21.242229  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.242716  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.242797  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.242683  189287 retry.go:31] will retry after 307.483232ms: waiting for machine to come up
	I0731 20:58:21.552437  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.552954  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.552977  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.552911  189287 retry.go:31] will retry after 441.063904ms: waiting for machine to come up
	I0731 20:58:21.995514  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.995860  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.995903  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.995813  189287 retry.go:31] will retry after 596.915537ms: waiting for machine to come up
	I0731 20:58:22.594563  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:22.595037  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:22.595079  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:22.594988  189287 retry.go:31] will retry after 471.207023ms: waiting for machine to come up
	I0731 20:58:23.067499  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.067926  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.067950  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.067899  189287 retry.go:31] will retry after 756.851428ms: waiting for machine to come up
	I0731 20:58:23.826869  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.827277  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.827305  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.827232  189287 retry.go:31] will retry after 981.303239ms: waiting for machine to come up
	I0731 20:58:24.810830  187862 start.go:360] acquireMachinesLock for embed-certs-831240: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:58:24.810239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:24.810615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:24.810651  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:24.810584  189287 retry.go:31] will retry after 1.18169902s: waiting for machine to come up
	I0731 20:58:25.994320  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:25.994700  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:25.994728  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:25.994635  189287 retry.go:31] will retry after 1.781207961s: waiting for machine to come up
	I0731 20:58:27.778381  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:27.778764  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:27.778805  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:27.778734  189287 retry.go:31] will retry after 1.885603462s: waiting for machine to come up
	I0731 20:58:29.665633  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:29.666049  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:29.666070  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:29.666026  189287 retry.go:31] will retry after 2.664379174s: waiting for machine to come up
	I0731 20:58:32.333226  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:32.333615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:32.333644  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:32.333594  189287 retry.go:31] will retry after 2.932420774s: waiting for machine to come up
	I0731 20:58:35.267165  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:35.267527  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:35.267558  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:35.267496  189287 retry.go:31] will retry after 4.378841892s: waiting for machine to come up
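	(In the lines above, retry.go waits for the restarted domain to obtain a DHCP lease, sleeping a little longer after each miss: 207ms, 307ms, 441ms, ... 4.4s. Below is a rough sketch of a grow-with-jitter retry helper in that spirit; the 1.5x growth factor and the names are assumptions used only to show the shape of the schedule.)

	// Package retrysketch shows a grow-with-jitter retry loop similar in spirit
	// to the increasing delays printed by retry.go above; illustrative only.
	package retrysketch

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// Do calls fn up to attempts times, sleeping a jittered, growing delay
	// between failures, and returns the last error if fn never succeeds.
	func Do(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay = delay * 3 / 2 // grow roughly 1.5x per attempt
		}
		return fmt.Errorf("all %d attempts failed: %w", attempts, err)
	}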
	I0731 20:58:41.010483  188266 start.go:364] duration metric: took 4m25.11688001s to acquireMachinesLock for "default-k8s-diff-port-125614"
	I0731 20:58:41.010557  188266 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:41.010566  188266 fix.go:54] fixHost starting: 
	I0731 20:58:41.010992  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:41.011033  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:41.030450  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0731 20:58:41.030910  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:41.031360  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:58:41.031382  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:41.031703  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:41.031859  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:41.032020  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:58:41.033653  188266 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125614: state=Stopped err=<nil>
	I0731 20:58:41.033695  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	W0731 20:58:41.033872  188266 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:41.035898  188266 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-125614" ...
	I0731 20:58:39.650969  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651458  188133 main.go:141] libmachine: (no-preload-916885) Found IP for machine: 192.168.72.239
	I0731 20:58:39.651475  188133 main.go:141] libmachine: (no-preload-916885) Reserving static IP address...
	I0731 20:58:39.651516  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has current primary IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651957  188133 main.go:141] libmachine: (no-preload-916885) Reserved static IP address: 192.168.72.239
	I0731 20:58:39.651995  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.652023  188133 main.go:141] libmachine: (no-preload-916885) Waiting for SSH to be available...
	I0731 20:58:39.652054  188133 main.go:141] libmachine: (no-preload-916885) DBG | skip adding static IP to network mk-no-preload-916885 - found existing host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"}
	I0731 20:58:39.652073  188133 main.go:141] libmachine: (no-preload-916885) DBG | Getting to WaitForSSH function...
	I0731 20:58:39.654095  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654450  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.654479  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654636  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH client type: external
	I0731 20:58:39.654659  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa (-rw-------)
	I0731 20:58:39.654714  188133 main.go:141] libmachine: (no-preload-916885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:39.654729  188133 main.go:141] libmachine: (no-preload-916885) DBG | About to run SSH command:
	I0731 20:58:39.654768  188133 main.go:141] libmachine: (no-preload-916885) DBG | exit 0
	I0731 20:58:39.781409  188133 main.go:141] libmachine: (no-preload-916885) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:39.781741  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetConfigRaw
	I0731 20:58:39.782349  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:39.784813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785234  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.785266  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785643  188133 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/config.json ...
	I0731 20:58:39.785859  188133 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:39.785879  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:39.786095  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.788573  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.788840  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.788868  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.789025  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.789203  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789495  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.789661  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.789927  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.789941  188133 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:39.901661  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:39.901687  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.901920  188133 buildroot.go:166] provisioning hostname "no-preload-916885"
	I0731 20:58:39.901953  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.902142  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.904763  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905159  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.905186  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905347  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.905534  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905698  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905822  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.905977  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.906137  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.906155  188133 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-916885 && echo "no-preload-916885" | sudo tee /etc/hostname
	I0731 20:58:40.030955  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-916885
	
	I0731 20:58:40.030979  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.033905  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034254  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.034276  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034487  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.034693  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.034868  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.035014  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.035197  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.035373  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.035392  188133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-916885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-916885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-916885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:40.154331  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:40.154381  188133 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:40.154436  188133 buildroot.go:174] setting up certificates
	I0731 20:58:40.154452  188133 provision.go:84] configureAuth start
	I0731 20:58:40.154474  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:40.154813  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:40.157702  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158053  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.158075  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158218  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.160715  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161030  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.161048  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161186  188133 provision.go:143] copyHostCerts
	I0731 20:58:40.161258  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:40.161267  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:40.161372  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:40.161477  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:40.161487  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:40.161520  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:40.161590  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:40.161606  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:40.161639  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:40.161700  188133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.no-preload-916885 san=[127.0.0.1 192.168.72.239 localhost minikube no-preload-916885]
	I0731 20:58:40.341529  188133 provision.go:177] copyRemoteCerts
	I0731 20:58:40.341586  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:40.341612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.344557  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.344851  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.344871  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.345080  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.345266  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.345432  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.345677  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.431395  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:40.455012  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 20:58:40.477721  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:40.500174  188133 provision.go:87] duration metric: took 345.705192ms to configureAuth
	I0731 20:58:40.500203  188133 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:40.500377  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 20:58:40.500462  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.503077  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503438  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.503467  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503586  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.503780  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.503947  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.504065  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.504245  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.504467  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.504489  188133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:58:40.765409  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:58:40.765448  188133 machine.go:97] duration metric: took 979.574417ms to provisionDockerMachine
	I0731 20:58:40.765460  188133 start.go:293] postStartSetup for "no-preload-916885" (driver="kvm2")
	I0731 20:58:40.765474  188133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:58:40.765525  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:40.765895  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:58:40.765928  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.768314  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768610  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.768657  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768760  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.768926  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.769089  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.769199  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.855821  188133 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:58:40.860032  188133 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:58:40.860071  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:58:40.860148  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:58:40.860251  188133 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:58:40.860367  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:58:40.869291  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:40.892945  188133 start.go:296] duration metric: took 127.469545ms for postStartSetup
	I0731 20:58:40.892991  188133 fix.go:56] duration metric: took 21.083232755s for fixHost
	I0731 20:58:40.893019  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.895784  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896166  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.896197  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896316  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.896501  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896654  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896777  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.896964  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.897133  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.897143  188133 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:58:41.010330  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459520.969906971
	
	I0731 20:58:41.010352  188133 fix.go:216] guest clock: 1722459520.969906971
	I0731 20:58:41.010360  188133 fix.go:229] Guest: 2024-07-31 20:58:40.969906971 +0000 UTC Remote: 2024-07-31 20:58:40.892995844 +0000 UTC m=+276.656012666 (delta=76.911127ms)
	I0731 20:58:41.010390  188133 fix.go:200] guest clock delta is within tolerance: 76.911127ms
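	(fix.go above reads the guest clock over SSH with `date +%s.%N`, compares it against the host clock, and accepts the ~77ms delta as within tolerance. A hedged sketch of that comparison follows; the 2-second threshold shown in the comment is an assumption, not the value minikube actually uses.)

	// Package clocksketch illustrates the guest-vs-host clock comparison logged
	// by fix.go above; thresholds and names are assumptions.
	package clocksketch

	import (
		"fmt"
		"time"
	)

	// DeltaWithinTolerance reports whether the absolute difference between the
	// guest and host clocks is small enough to skip adjusting the guest clock
	// (e.g. tolerance = 2*time.Second as an assumed threshold).
	func DeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta is %v (tolerance %v)\n", delta, tolerance)
		return delta <= tolerance
	}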
	I0731 20:58:41.010396  188133 start.go:83] releasing machines lock for "no-preload-916885", held for 21.200662427s
	I0731 20:58:41.010429  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.010733  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:41.013519  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.013841  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.013867  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.014034  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014637  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014829  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014914  188133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:58:41.014974  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.015051  188133 ssh_runner.go:195] Run: cat /version.json
	I0731 20:58:41.015074  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.017813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.017837  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018170  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018205  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018225  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018493  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018678  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018694  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018862  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018885  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018965  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.019040  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.107999  188133 ssh_runner.go:195] Run: systemctl --version
	I0731 20:58:41.133039  188133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:58:41.279485  188133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:58:41.285765  188133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:58:41.285838  188133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:58:41.302175  188133 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:58:41.302203  188133 start.go:495] detecting cgroup driver to use...
	I0731 20:58:41.302280  188133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:58:41.319896  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:58:41.334618  188133 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:58:41.334689  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:58:41.348292  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:58:41.363968  188133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:58:41.472992  188133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:58:41.605581  188133 docker.go:233] disabling docker service ...
	I0731 20:58:41.605669  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:58:41.620414  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:58:41.632951  188133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:58:41.783942  188133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:58:41.912311  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:58:41.931076  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:58:41.954672  188133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 20:58:41.954752  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.967478  188133 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:58:41.967567  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.978990  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.991689  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.003168  188133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:58:42.019114  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.034607  188133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.057543  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.070420  188133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:58:42.081173  188133 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:58:42.081245  188133 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:58:42.095455  188133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:58:42.106943  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:42.221724  188133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:58:42.375966  188133 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:58:42.376051  188133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:58:42.381473  188133 start.go:563] Will wait 60s for crictl version
	I0731 20:58:42.381548  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.385364  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:58:42.426783  188133 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:58:42.426872  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.459096  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.490043  188133 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 20:58:42.491578  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:42.494915  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495289  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:42.495310  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495610  188133 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 20:58:42.500266  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:42.515164  188133 kubeadm.go:883] updating cluster {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:58:42.515295  188133 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 20:58:42.515332  188133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:58:42.551930  188133 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 20:58:42.551961  188133 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:58:42.552025  188133 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.552047  188133 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 20:58:42.552067  188133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.552087  188133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.552071  188133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.552028  188133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.552129  188133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.552035  188133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554026  188133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.554044  188133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.554103  188133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554112  188133 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 20:58:42.554123  188133 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.554030  188133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.554032  188133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.554027  188133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.721659  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.743910  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.750941  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 20:58:42.772074  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.781921  188133 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 20:58:42.781964  188133 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.782014  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.793926  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.813112  188133 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 20:58:42.813154  188133 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.813202  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.916544  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.937647  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.948145  188133 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 20:58:42.948194  188133 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.948208  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.948237  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.948268  188133 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 20:58:42.948300  188133 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.948338  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.948341  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.006187  188133 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 20:58:43.006238  188133 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.006295  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045484  188133 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 20:58:43.045541  188133 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.045585  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045589  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:43.045643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 20:58:43.045710  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 20:58:43.045730  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.045741  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:43.045780  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.045823  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:43.122382  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122429  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 20:58:43.122449  188133 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122489  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122497  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122513  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 20:58:43.122517  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122588  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.122637  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.122731  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.522969  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:41.037393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Start
	I0731 20:58:41.037575  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring networks are active...
	I0731 20:58:41.038366  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network default is active
	I0731 20:58:41.038703  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network mk-default-k8s-diff-port-125614 is active
	I0731 20:58:41.039402  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Getting domain xml...
	I0731 20:58:41.040218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Creating domain...
	I0731 20:58:42.319123  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting to get IP...
	I0731 20:58:42.320314  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320801  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320908  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.320797  189429 retry.go:31] will retry after 274.801111ms: waiting for machine to come up
	I0731 20:58:42.597444  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597866  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597914  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.597842  189429 retry.go:31] will retry after 382.328248ms: waiting for machine to come up
	I0731 20:58:42.981533  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982018  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982051  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.981955  189429 retry.go:31] will retry after 426.247953ms: waiting for machine to come up
	I0731 20:58:43.409523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.409795  189429 retry.go:31] will retry after 483.501118ms: waiting for machine to come up
	I0731 20:58:43.894451  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894844  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894874  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.894779  189429 retry.go:31] will retry after 759.968593ms: waiting for machine to come up
	I0731 20:58:44.656097  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656551  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656580  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:44.656503  189429 retry.go:31] will retry after 766.563008ms: waiting for machine to come up
	I0731 20:58:45.424382  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424793  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424831  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:45.424744  189429 retry.go:31] will retry after 1.172047019s: waiting for machine to come up
	I0731 20:58:45.107333  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.984807614s)
	I0731 20:58:45.107368  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 20:58:45.107393  188133 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107452  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107471  188133 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0: (1.98485492s)
	I0731 20:58:45.107523  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.985012474s)
	I0731 20:58:45.107534  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107560  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107563  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.984910291s)
	I0731 20:58:45.107585  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107609  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.984862504s)
	I0731 20:58:45.107619  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107626  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107668  188133 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.584674739s)
	I0731 20:58:45.107701  188133 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 20:58:45.107729  188133 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:45.107761  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:48.706832  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.599347822s)
	I0731 20:58:48.706872  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 20:58:48.706886  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (3.599247467s)
	I0731 20:58:48.706923  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 20:58:48.706898  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.706925  188133 ssh_runner.go:235] Completed: which crictl: (3.599146318s)
	I0731 20:58:48.706979  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:48.706980  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.747292  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 20:58:48.747415  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:46.598636  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599086  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599117  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:46.599033  189429 retry.go:31] will retry after 1.204122239s: waiting for machine to come up
	I0731 20:58:47.805441  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805922  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:47.805864  189429 retry.go:31] will retry after 1.466632244s: waiting for machine to come up
	I0731 20:58:49.274527  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275030  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:49.274961  189429 retry.go:31] will retry after 2.04848438s: waiting for machine to come up
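The repeated "will retry after ..." lines above are libmachine polling for the VM's DHCP lease with a growing, jittered delay. A minimal sketch of that poll-with-backoff shape, assuming a generic lookup callback (illustrative only, not minikube's retry package; the backoff growth factor is an assumption):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout passes,
// sleeping a growing, jittered interval between attempts -- the same shape as
// the "will retry after ..." lines in the log above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	start := time.Now()
	backoff := 250 * time.Millisecond
	for time.Since(start) < timeout {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))/2
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt (assumed factor)
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.50.221", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}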
	I0731 20:58:50.902082  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.154633427s)
	I0731 20:58:50.902138  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 20:58:50.902203  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.195118092s)
	I0731 20:58:50.902226  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 20:58:50.902259  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:50.902320  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:52.863335  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.960989386s)
	I0731 20:58:52.863370  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 20:58:52.863394  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:52.863434  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:51.324633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325056  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325080  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:51.324983  189429 retry.go:31] will retry after 1.991151757s: waiting for machine to come up
	I0731 20:58:53.318784  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319133  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319164  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:53.319077  189429 retry.go:31] will retry after 2.631932264s: waiting for machine to come up
	I0731 20:58:54.629811  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.766355185s)
	I0731 20:58:54.629840  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 20:58:54.629882  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:54.629954  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:55.983610  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.353622135s)
	I0731 20:58:55.983655  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 20:58:55.983692  188133 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:55.983764  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:56.828512  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 20:58:56.828560  188133 cache_images.go:123] Successfully loaded all cached images
	I0731 20:58:56.828568  188133 cache_images.go:92] duration metric: took 14.276593942s to LoadCachedImages
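Each image reported as "needs transfer" above goes through the same three commands visible in the log: inspect the runtime's store for the expected ID, remove any stale tag with crictl, then stream the cached tarball back in with podman load. A condensed, illustrative sketch of that per-image sequence (hypothetical helper, not minikube's cache_images.go; the tag, hash and path are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// loadCachedImage mirrors the per-image steps in the log: if the runtime
// already holds the expected image ID, do nothing; otherwise drop the stale
// reference and reload the cached tarball with podman.
func loadCachedImage(tag, wantID, tarball string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", tag).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // image already present in the container runtime
	}
	// "needs transfer": remove any stale reference before reloading.
	_ = exec.Command("sudo", "crictl", "rmi", tag).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	err := loadCachedImage(
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
		"/var/lib/minikube/images/coredns_v1.11.1",
	)
	if err != nil {
		fmt.Println(err)
	}
}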
	I0731 20:58:56.828583  188133 kubeadm.go:934] updating node { 192.168.72.239 8443 v1.31.0-beta.0 crio true true} ...
	I0731 20:58:56.828722  188133 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-916885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:58:56.828806  188133 ssh_runner.go:195] Run: crio config
	I0731 20:58:56.877187  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:58:56.877222  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:58:56.877245  188133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:58:56.877269  188133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.239 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-916885 NodeName:no-preload-916885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:58:56.877442  188133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-916885"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:58:56.877526  188133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 20:58:56.887721  188133 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:58:56.887796  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:58:56.896845  188133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 20:58:56.912886  188133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 20:58:56.928914  188133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 20:58:56.945604  188133 ssh_runner.go:195] Run: grep 192.168.72.239	control-plane.minikube.internal$ /etc/hosts
	I0731 20:58:56.949538  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:56.961490  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:57.075114  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:58:57.091701  188133 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885 for IP: 192.168.72.239
	I0731 20:58:57.091724  188133 certs.go:194] generating shared ca certs ...
	I0731 20:58:57.091743  188133 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:58:57.091909  188133 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:58:57.091959  188133 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:58:57.091971  188133 certs.go:256] generating profile certs ...
	I0731 20:58:57.092062  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/client.key
	I0731 20:58:57.092141  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key.cc7e9c96
	I0731 20:58:57.092193  188133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key
	I0731 20:58:57.092330  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:58:57.092405  188133 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:58:57.092423  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:58:57.092458  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:58:57.092489  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:58:57.092520  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:58:57.092586  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:57.093296  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:58:57.139431  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:58:57.169132  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:58:57.196541  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:58:57.232826  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:58:57.260875  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:58:57.290195  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:58:57.316645  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:58:57.339741  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:58:57.362406  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:58:57.385009  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:58:57.407540  188133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:58:57.423697  188133 ssh_runner.go:195] Run: openssl version
	I0731 20:58:57.429741  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:58:57.440545  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.444984  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.445035  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.450651  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:58:57.460547  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:58:57.470575  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474939  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474988  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.480481  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:58:57.490404  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:58:57.500433  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504785  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504835  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.510165  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
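The test/ls/openssl/ln runs above install each CA into the guest's OpenSSL trust directory: compute the certificate's subject hash, then point /etc/ssl/certs/<hash>.0 at the PEM. A small illustrative helper that shells out to openssl the same way the log does (hypothetical, not minikube's certs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installTrustLink computes the certificate's OpenSSL subject hash and links
// /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based clients in the guest
// can find the CA. Mirrors the `openssl x509 -hash` and `ln -fs` commands above.
func installTrustLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// -f replaces an existing link, -s makes it symbolic, as in the log.
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installTrustLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}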
	I0731 20:58:57.520019  188133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:58:57.524596  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:58:57.530667  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:58:57.536315  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:58:57.542049  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:58:57.547594  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:58:57.553084  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
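The run of `openssl x509 -noout -in ... -checkend 86400` commands verifies that none of the control-plane certificates expire within the next 24 hours before the cluster is brought back up. The equivalent check written against Go's standard library, as an illustrative sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window -- the same question `openssl x509 -checkend 86400`
// answers for each control-plane cert in the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}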
	I0731 20:58:57.558419  188133 kubeadm.go:392] StartCluster: {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:58:57.558501  188133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:58:57.558537  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.600004  188133 cri.go:89] found id: ""
	I0731 20:58:57.600087  188133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:58:57.609911  188133 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:58:57.609933  188133 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:58:57.609975  188133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:58:57.619498  188133 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:58:57.621885  188133 kubeconfig.go:125] found "no-preload-916885" server: "https://192.168.72.239:8443"
	I0731 20:58:57.624838  188133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:58:57.633984  188133 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.239
	I0731 20:58:57.634025  188133 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:58:57.634037  188133 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:58:57.634080  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.672988  188133 cri.go:89] found id: ""
	I0731 20:58:57.673053  188133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:58:57.689149  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:58:57.698520  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:58:57.698541  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 20:58:57.698595  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:58:57.707106  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:58:57.707163  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:58:57.715878  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:58:57.724169  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:58:57.724219  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:58:57.732890  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.741121  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:58:57.741174  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.749776  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:58:57.758063  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:58:57.758114  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
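The grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so the subsequent `kubeadm init phase kubeconfig` run regenerates it. A compact illustrative version of that decision (not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes every kubeconfig that does not reference the
// expected control-plane endpoint, letting kubeadm rewrite it from scratch --
// the same choice the grep/rm pairs in the log make.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove:", rmErr)
			}
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}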
	I0731 20:58:57.766815  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:58:57.775595  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:57.883689  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.740684  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.926231  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.987089  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:59.049782  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:58:59.049862  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.418227  188656 start.go:364] duration metric: took 3m46.480116699s to acquireMachinesLock for "old-k8s-version-239115"
	I0731 20:59:00.418294  188656 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:00.418302  188656 fix.go:54] fixHost starting: 
	I0731 20:59:00.418738  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:00.418773  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:00.438533  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0731 20:59:00.438963  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:00.439499  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:59:00.439524  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:00.439930  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:00.441449  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:00.441651  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetState
	I0731 20:59:00.443465  188656 fix.go:112] recreateIfNeeded on old-k8s-version-239115: state=Stopped err=<nil>
	I0731 20:59:00.443505  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	W0731 20:59:00.443679  188656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:00.445840  188656 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239115" ...
	I0731 20:58:55.953940  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954422  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:55.954356  189429 retry.go:31] will retry after 3.068212527s: waiting for machine to come up
	I0731 20:58:59.025966  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026388  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has current primary IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026406  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Found IP for machine: 192.168.50.221
	I0731 20:58:59.026417  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserving static IP address...
	I0731 20:58:59.026867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserved static IP address: 192.168.50.221
	I0731 20:58:59.026918  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.026933  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for SSH to be available...
	I0731 20:58:59.026954  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | skip adding static IP to network mk-default-k8s-diff-port-125614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"}
	I0731 20:58:59.026972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Getting to WaitForSSH function...
	I0731 20:58:59.029330  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029685  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.029720  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029820  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH client type: external
	I0731 20:58:59.029846  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa (-rw-------)
	I0731 20:58:59.029877  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:59.029894  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | About to run SSH command:
	I0731 20:58:59.029906  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | exit 0
	I0731 20:58:59.161209  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:59.161713  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetConfigRaw
	I0731 20:58:59.162331  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.164645  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.164953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.164986  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.165269  188266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/config.json ...
	I0731 20:58:59.165479  188266 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:59.165503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:59.165692  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.167796  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.168110  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168247  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.168408  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168626  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168763  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.168901  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.169103  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.169115  188266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:59.281875  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:59.281901  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282185  188266 buildroot.go:166] provisioning hostname "default-k8s-diff-port-125614"
	I0731 20:58:59.282218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282460  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.284970  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285461  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.285498  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285612  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.285814  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286139  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.286278  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.286445  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.286460  188266 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125614 && echo "default-k8s-diff-port-125614" | sudo tee /etc/hostname
	I0731 20:58:59.411873  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125614
	
	I0731 20:58:59.411904  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.414733  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.415099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415274  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.415463  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415604  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415751  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.415898  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.416074  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.416090  188266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:59.539168  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:59.539210  188266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:59.539247  188266 buildroot.go:174] setting up certificates
	I0731 20:58:59.539256  188266 provision.go:84] configureAuth start
	I0731 20:58:59.539267  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.539595  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.542447  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.542887  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.542916  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.543103  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.545597  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.545972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.545992  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.546128  188266 provision.go:143] copyHostCerts
	I0731 20:58:59.546195  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:59.546206  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:59.546265  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:59.546366  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:59.546377  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:59.546407  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:59.546488  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:59.546492  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:59.546517  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:59.546565  188266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125614 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-125614 localhost minikube]
	I0731 20:58:59.690753  188266 provision.go:177] copyRemoteCerts
	I0731 20:58:59.690811  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:59.690839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.693800  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694141  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.694175  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694383  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.694583  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.694748  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.694884  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:58:59.783710  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:59.814512  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 20:58:59.843492  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:59.867793  188266 provision.go:87] duration metric: took 328.521723ms to configureAuth
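
	(Editorial aside on the configureAuth step above: the server certificate is regenerated so its SANs cover the loopback address, the VM IP, the machine name, localhost and minikube, and the resulting ca.pem/server.pem/server-key.pem are then copied to /etc/docker. The following is only a condensed, hypothetical Go sketch of issuing such a SAN-bearing certificate with crypto/x509; the in-memory CA, key sizes, validity periods and omitted error handling are simplifications, not minikube's actual provision code.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Error handling is omitted for brevity in this sketch.
	// Stand-in CA (the real flow loads ca.pem / ca-key.pem from .minikube/certs).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the log: 127.0.0.1, VM IP, machine name, localhost, minikube.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-125614"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-125614", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.221")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0600)
}
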
	I0731 20:58:59.867840  188266 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:59.868013  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:58:59.868089  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.871214  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871655  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.871684  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871875  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.872127  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872309  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.872687  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.872909  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.872935  188266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:00.165458  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:00.165492  188266 machine.go:97] duration metric: took 999.996831ms to provisionDockerMachine
	I0731 20:59:00.165509  188266 start.go:293] postStartSetup for "default-k8s-diff-port-125614" (driver="kvm2")
	I0731 20:59:00.165527  188266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:00.165549  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.165936  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:00.165973  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.168477  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168837  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.168864  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168991  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.169203  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.169387  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.169575  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.262132  188266 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:00.266596  188266 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:00.266621  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:00.266695  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:00.266789  188266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:00.266909  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:00.276407  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:00.300017  188266 start.go:296] duration metric: took 134.490488ms for postStartSetup
	I0731 20:59:00.300061  188266 fix.go:56] duration metric: took 19.289494966s for fixHost
	I0731 20:59:00.300087  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.302714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303073  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.303106  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303249  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.303448  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303786  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.303978  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:00.304204  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:59:00.304217  188266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:00.418073  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459540.389901096
	
	I0731 20:59:00.418096  188266 fix.go:216] guest clock: 1722459540.389901096
	I0731 20:59:00.418105  188266 fix.go:229] Guest: 2024-07-31 20:59:00.389901096 +0000 UTC Remote: 2024-07-31 20:59:00.30006642 +0000 UTC m=+284.542031804 (delta=89.834676ms)
	I0731 20:59:00.418130  188266 fix.go:200] guest clock delta is within tolerance: 89.834676ms
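
	(The fix.go lines above read the guest clock over SSH with `date +%s.%N` and compare it with the host clock before deciding whether a resync is needed. Below is a minimal, hypothetical Go sketch of that comparison; the parsing helper, the 2-second tolerance and the function names are illustrative assumptions, not minikube's code.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N`
// (e.g. "1722459540.389901096") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722459540.389901096")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	// Treat anything under ~2s as "within tolerance", mirroring the delta check in the log.
	if math.Abs(delta.Seconds()) < 2 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large; would resync the guest clock\n", delta)
	}
}
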
	I0731 20:59:00.418138  188266 start.go:83] releasing machines lock for "default-k8s-diff-port-125614", held for 19.407605953s
	I0731 20:59:00.418167  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.418669  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:00.421683  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422050  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.422090  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422234  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422999  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.423061  188266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:00.423119  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.423354  188266 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:00.423378  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.426188  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426362  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426603  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426631  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426790  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.426882  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426929  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.427019  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427197  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427208  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.427363  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.427380  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427668  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.511834  188266 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:00.536649  188266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:00.692463  188266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:00.700344  188266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:00.700413  188266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:00.721837  188266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:00.721863  188266 start.go:495] detecting cgroup driver to use...
	I0731 20:59:00.721940  188266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:00.742477  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:00.760049  188266 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:00.760120  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:00.777823  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:00.791680  188266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:00.908094  188266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:01.051284  188266 docker.go:233] disabling docker service ...
	I0731 20:59:01.051379  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:01.070927  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:01.083393  188266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:01.223186  188266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:01.355265  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:01.369810  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:01.390523  188266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:01.390588  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.401241  188266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:01.401308  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.412049  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.422145  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.432523  188266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:01.442640  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.456933  188266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.475628  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
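
	(The sed one-liners above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the default_sysctls list. Purely as an illustration of the first of those edits, a Go equivalent using a multi-line regexp might look like the sketch below; it is not the code minikube runs.)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte(`[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
`)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	fmt.Print(string(out))
}
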
	I0731 20:59:01.486226  188266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:01.496757  188266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:01.496813  188266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:01.510264  188266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
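
	(The sysctl probe above exits with status 255 because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist; the tool then loads the module and enables IPv4 forwarding. A rough Go sketch of that fallback is shown below, with root privileges assumed and error handling simplified; it is illustrative, not minikube's implementation.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback seen in the log: if the
// bridge-nf-call-iptables sysctl is missing, load br_netfilter, then
// make sure IPv4 forwarding is on.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key absent: the br_netfilter module is not loaded yet.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
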
	I0731 20:59:01.520231  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:01.636451  188266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:01.784134  188266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:01.784220  188266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:01.788836  188266 start.go:563] Will wait 60s for crictl version
	I0731 20:59:01.788895  188266 ssh_runner.go:195] Run: which crictl
	I0731 20:59:01.793059  188266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:01.840110  188266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:01.840200  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.868816  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.908539  188266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:00.447208  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .Start
	I0731 20:59:00.447389  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring networks are active...
	I0731 20:59:00.448116  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network default is active
	I0731 20:59:00.448589  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network mk-old-k8s-version-239115 is active
	I0731 20:59:00.448892  188656 main.go:141] libmachine: (old-k8s-version-239115) Getting domain xml...
	I0731 20:59:00.450110  188656 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:59:01.823554  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting to get IP...
	I0731 20:59:01.824648  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:01.825111  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:01.825172  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:01.825080  189574 retry.go:31] will retry after 241.700507ms: waiting for machine to come up
	I0731 20:59:02.068913  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.069608  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.069738  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.069663  189574 retry.go:31] will retry after 258.921821ms: waiting for machine to come up
	I0731 20:59:02.330231  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.330846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.330877  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.330776  189574 retry.go:31] will retry after 460.911793ms: waiting for machine to come up
	I0731 20:59:02.793718  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.794177  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.794206  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.794156  189574 retry.go:31] will retry after 380.241989ms: waiting for machine to come up
	I0731 20:59:03.175918  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.176761  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.176786  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.176670  189574 retry.go:31] will retry after 631.876736ms: waiting for machine to come up
	I0731 20:59:03.810803  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.811478  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.811503  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.811366  189574 retry.go:31] will retry after 583.328017ms: waiting for machine to come up
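
	(The retry.go lines above poll the libvirt DHCP leases for the restarted VM, sleeping a growing, jittered interval between attempts. The toy Go sketch below illustrates that wait-for-IP loop; the backoff factor, jitter and the stubbed lookup returning a documentation-range address are assumptions, not minikube's retry implementation.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until the VM reports an address, sleeping a
// jittered, growing interval between attempts, similar to the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil && ip != "" {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow the base interval gradually
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.0.2.10", nil // documentation-range address as a stand-in
	}, time.Minute)
	fmt.Println(ip, err)
}
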
	I0731 20:58:59.550347  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.050077  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.066942  188133 api_server.go:72] duration metric: took 1.017157745s to wait for apiserver process to appear ...
	I0731 20:59:00.066991  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:00.067016  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:00.067685  188133 api_server.go:269] stopped: https://192.168.72.239:8443/healthz: Get "https://192.168.72.239:8443/healthz": dial tcp 192.168.72.239:8443: connect: connection refused
	I0731 20:59:00.567237  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.555694  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.555739  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.555756  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.606602  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.606641  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.606657  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.617900  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.617935  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:04.067724  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.073838  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.073875  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:04.568116  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.575013  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.575044  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:05.067154  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:05.073314  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 20:59:05.083559  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 20:59:05.083595  188133 api_server.go:131] duration metric: took 5.016595337s to wait for apiserver health ...
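
	(The healthz probe sequence above is typical of an apiserver restart: connection refused while the process starts, 403 for the anonymous probe until the RBAC bootstrap roles and bindings that permit unauthenticated /healthz access are created, 500 while post-start hooks finish, then 200 "ok". A minimal, hypothetical Go poll loop in that spirit is sketched below; the timeout, poll interval and the InsecureSkipVerify shortcut are illustrative assumptions, not the exact minikube implementation.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
// or the timeout expires, tolerating 403/500 responses during startup.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert during bring-up; a real client
		// would pin the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.239:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
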
	I0731 20:59:05.083606  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:59:05.083614  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:05.085564  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:01.910091  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:01.913322  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.913714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:01.913747  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.914046  188266 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:01.918504  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:01.930599  188266 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:01.930756  188266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:01.930826  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:01.968796  188266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:01.968882  188266 ssh_runner.go:195] Run: which lz4
	I0731 20:59:01.974123  188266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:01.979542  188266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:01.979575  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:03.529579  188266 crio.go:462] duration metric: took 1.555502358s to copy over tarball
	I0731 20:59:03.529662  188266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:04.395886  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:04.396400  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:04.396664  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:04.396347  189574 retry.go:31] will retry after 1.154504022s: waiting for machine to come up
	I0731 20:59:05.552240  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:05.552879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:05.552901  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:05.552831  189574 retry.go:31] will retry after 1.037365333s: waiting for machine to come up
	I0731 20:59:06.591875  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:06.592416  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:06.592450  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:06.592329  189574 retry.go:31] will retry after 1.249444079s: waiting for machine to come up
	I0731 20:59:07.843058  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:07.843436  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:07.843463  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:07.843370  189574 retry.go:31] will retry after 1.700521776s: waiting for machine to come up
	I0731 20:59:05.087080  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:05.105303  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:05.125019  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:05.136768  188133 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:05.136823  188133 system_pods.go:61] "coredns-5cfdc65f69-c9gcf" [3b9458d3-81d0-4138-8a6a-81f087c3ed02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:05.136836  188133 system_pods.go:61] "etcd-no-preload-916885" [aa31006d-8e74-48c2-9b5d-5604b3a1c283] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:05.136847  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [64549ba0-8e30-41d3-82eb-cdb729623a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:05.136856  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [2620c741-c27a-4df5-8555-58767d43c675] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:05.136866  188133 system_pods.go:61] "kube-proxy-99jgm" [0060c1a0-badc-401c-a4dc-5cfaa958654e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:05.136880  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [f02a0a1d-5cbb-4ee3-a084-21710667565e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:05.136894  188133 system_pods.go:61] "metrics-server-78fcd8795b-jrzgg" [acbe48be-32e9-44f8-9bf2-52e0e92a09e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:05.136912  188133 system_pods.go:61] "storage-provisioner" [d0f902cd-d1db-4c70-bdad-34bda917cec1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:05.136926  188133 system_pods.go:74] duration metric: took 11.882384ms to wait for pod list to return data ...
	I0731 20:59:05.136937  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:05.142117  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:05.142149  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:05.142165  188133 node_conditions.go:105] duration metric: took 5.221098ms to run NodePressure ...
	I0731 20:59:05.142187  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:05.534597  188133 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539583  188133 kubeadm.go:739] kubelet initialised
	I0731 20:59:05.539604  188133 kubeadm.go:740] duration metric: took 4.980297ms waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539626  188133 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:05.544498  188133 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:07.778624  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:06.024682  188266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.494984583s)
	I0731 20:59:06.024718  188266 crio.go:469] duration metric: took 2.495107603s to extract the tarball
	I0731 20:59:06.024729  188266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:06.062675  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:06.107619  188266 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:06.107649  188266 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:59:06.107667  188266 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0731 20:59:06.107792  188266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-125614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:06.107863  188266 ssh_runner.go:195] Run: crio config
	I0731 20:59:06.173983  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:06.174007  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:06.174019  188266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:06.174040  188266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125614 NodeName:default-k8s-diff-port-125614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:06.174168  188266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125614"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
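
	(The kubeadm/kubelet/kube-proxy configuration above is rendered from Go templates before being written to /var/tmp/minikube/kubeadm.yaml. The snippet below is only a toy text/template illustration of that rendering step, using an abbreviated, hypothetical template fragment rather than minikube's real one.)

package main

import (
	"os"
	"text/template"
)

// A tiny, abbreviated stand-in for the kubeadm config template; the real
// template covers many more fields than shown here.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	data := struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.50.221", 8444, "default-k8s-diff-port-125614"}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
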
	
	I0731 20:59:06.174233  188266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:06.185059  188266 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:06.185189  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:06.196571  188266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 20:59:06.218964  188266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:06.239033  188266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 20:59:06.260519  188266 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:06.264718  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:06.278173  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:06.423941  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:06.441663  188266 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614 for IP: 192.168.50.221
	I0731 20:59:06.441689  188266 certs.go:194] generating shared ca certs ...
	I0731 20:59:06.441711  188266 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:06.441906  188266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:06.441965  188266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:06.441978  188266 certs.go:256] generating profile certs ...
	I0731 20:59:06.442080  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/client.key
	I0731 20:59:06.442157  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key.9cb12361
	I0731 20:59:06.442205  188266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key
	I0731 20:59:06.442354  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:06.442391  188266 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:06.442404  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:06.442447  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:06.442478  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:06.442522  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:06.442580  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:06.443470  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:06.497056  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:06.530978  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:06.574533  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:06.619523  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 20:59:06.648269  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:06.677824  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:06.704450  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:06.731606  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:06.756990  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:06.781214  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:06.804855  188266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:06.821531  188266 ssh_runner.go:195] Run: openssl version
	I0731 20:59:06.827394  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:06.838680  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843618  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843681  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.850238  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:06.865533  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:06.881516  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886809  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886876  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.893345  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:06.908919  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:06.922150  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927165  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927226  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.933724  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:06.946420  188266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:06.951347  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:06.959595  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:06.967808  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:06.977083  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:06.985089  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:06.992190  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
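	The six openssl probes above verify that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds). The same check can be repeated by hand on the VM; a sketch, using the apiserver-kubelet-client certificate as the example:

	  # exit code 0 means the certificate does not expire within the next 24h
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo ok || echo "expiring soon"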
	I0731 20:59:06.998458  188266 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:06.998548  188266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:06.998592  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.053176  188266 cri.go:89] found id: ""
	I0731 20:59:07.053256  188266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:07.064373  188266 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:07.064392  188266 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:07.064433  188266 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:07.075167  188266 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:07.076057  188266 kubeconfig.go:125] found "default-k8s-diff-port-125614" server: "https://192.168.50.221:8444"
	I0731 20:59:07.078091  188266 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:07.089136  188266 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0731 20:59:07.089161  188266 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:07.089174  188266 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:07.089225  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.133015  188266 cri.go:89] found id: ""
	I0731 20:59:07.133099  188266 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:07.155229  188266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:07.166326  188266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:07.166348  188266 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:07.166418  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 20:59:07.176709  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:07.176768  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:07.187232  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 20:59:07.197376  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:07.197453  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:07.209451  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.221141  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:07.221205  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.232016  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 20:59:07.242340  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:07.242402  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:07.253794  188266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:07.264912  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:07.382193  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.445321  188266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.063086935s)
	I0731 20:59:08.445364  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.664603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.744053  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
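	Restarting the primary control plane re-runs a fixed sequence of kubeadm init phases against the regenerated config, as the five commands above show; stripped of the sudo/PATH wrapper the sequence is roughly:

	  kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
	  kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
	  kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
	  kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
	  kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml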
	I0731 20:59:08.857284  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:08.857380  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.357505  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.857488  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.887329  188266 api_server.go:72] duration metric: took 1.030046485s to wait for apiserver process to appear ...
	I0731 20:59:09.887358  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:09.887405  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.887966  188266 api_server.go:269] stopped: https://192.168.50.221:8444/healthz: Get "https://192.168.50.221:8444/healthz": dial tcp 192.168.50.221:8444: connect: connection refused
	I0731 20:59:10.387674  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.545937  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:09.546581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:09.546605  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:09.546529  189574 retry.go:31] will retry after 1.934269586s: waiting for machine to come up
	I0731 20:59:11.482402  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:11.482794  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:11.482823  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:11.482744  189574 retry.go:31] will retry after 2.575131422s: waiting for machine to come up
	I0731 20:59:10.053236  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:10.551437  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:10.551467  188133 pod_ready.go:81] duration metric: took 5.006944467s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:10.551480  188133 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:12.559346  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:12.827297  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.827342  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.827390  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.883496  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.883538  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.887715  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.902715  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:12.902746  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.388340  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.392840  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.392872  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.888510  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.894519  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.894553  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:14.388177  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:14.392557  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 20:59:14.399285  188266 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:14.399321  188266 api_server.go:131] duration metric: took 4.511955505s to wait for apiserver health ...
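	The [+]/[-] component listings above are the apiserver's own verbose /healthz report; once the kubeconfig is written, the same report can be pulled directly. A sketch, assuming the kubectl context matches the profile name as minikube configures it:

	  kubectl --context default-k8s-diff-port-125614 get --raw='/healthz?verbose'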
	I0731 20:59:14.399333  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:14.399340  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:14.400987  188266 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:14.401981  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:14.420648  188266 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:14.441909  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:14.451365  188266 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:14.451406  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:14.451419  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:14.451426  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:14.451432  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:14.451438  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:14.451444  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:14.451461  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:14.451468  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:14.451476  188266 system_pods.go:74] duration metric: took 9.546534ms to wait for pod list to return data ...
	I0731 20:59:14.451486  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:14.454760  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:14.454784  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:14.454795  188266 node_conditions.go:105] duration metric: took 3.303087ms to run NodePressure ...
	I0731 20:59:14.454820  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:14.730635  188266 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735144  188266 kubeadm.go:739] kubelet initialised
	I0731 20:59:14.735165  188266 kubeadm.go:740] duration metric: took 4.500388ms waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735173  188266 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:14.742292  188266 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.749460  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749486  188266 pod_ready.go:81] duration metric: took 7.166399ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.749496  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749504  188266 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.757068  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757091  188266 pod_ready.go:81] duration metric: took 7.579526ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.757101  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757109  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.762181  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762203  188266 pod_ready.go:81] duration metric: took 5.083756ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.762213  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762219  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.845070  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845095  188266 pod_ready.go:81] duration metric: took 82.86894ms for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.845107  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845113  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.246100  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246131  188266 pod_ready.go:81] duration metric: took 401.011321ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.246150  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246159  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.645657  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645689  188266 pod_ready.go:81] duration metric: took 399.519543ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.645704  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645713  188266 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.045744  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045776  188266 pod_ready.go:81] duration metric: took 400.053102ms for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:16.045791  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045800  188266 pod_ready.go:38] duration metric: took 1.310615323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:16.045838  188266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:59:16.059046  188266 ops.go:34] apiserver oom_adj: -16
	I0731 20:59:16.059071  188266 kubeadm.go:597] duration metric: took 8.994671774s to restartPrimaryControlPlane
	I0731 20:59:16.059082  188266 kubeadm.go:394] duration metric: took 9.060633072s to StartCluster
	I0731 20:59:16.059104  188266 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.059181  188266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:16.060895  188266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.061143  188266 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:59:16.061226  188266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:59:16.061324  188266 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061386  188266 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061399  188266 addons.go:243] addon storage-provisioner should already be in state true
	I0731 20:59:16.061388  188266 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061400  188266 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061453  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:16.061495  188266 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061516  188266 addons.go:243] addon metrics-server should already be in state true
	I0731 20:59:16.061438  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061603  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061436  188266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125614"
	I0731 20:59:16.062072  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062084  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062085  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062110  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062127  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062188  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062822  188266 out.go:177] * Verifying Kubernetes components...
	I0731 20:59:16.064337  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:16.081194  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I0731 20:59:16.081208  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0731 20:59:16.081197  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0731 20:59:16.081872  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.081956  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082026  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082423  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082439  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.082926  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082951  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083047  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.083058  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.083076  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083712  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.083754  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.084871  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085484  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085734  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.085815  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.085845  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.089827  188266 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.089854  188266 addons.go:243] addon default-storageclass should already be in state true
	I0731 20:59:16.089884  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.090245  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.090301  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.106592  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0731 20:59:16.106609  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 20:59:16.108751  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.108849  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.109414  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109442  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109546  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109576  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109948  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.109953  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.110132  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.110163  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.111216  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0731 20:59:16.111657  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.112217  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.112239  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.112319  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.113374  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.115608  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.115649  188266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:16.115940  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.115979  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.116965  188266 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:16.117053  188266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.117069  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:59:16.117083  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.118247  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 20:59:16.118268  188266 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 20:59:16.118288  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.120985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121540  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.121563  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121764  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.121865  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.122295  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.122371  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.122490  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122552  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.122632  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.122850  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.123024  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.123218  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.133929  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0731 20:59:16.134348  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.134844  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.134865  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.135175  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.135389  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.136985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.137272  188266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.137287  188266 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:59:16.137313  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.140222  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140543  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.140560  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.140762  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140795  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.140969  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.141107  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.257677  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:16.275791  188266 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:16.373528  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 20:59:16.373552  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 20:59:16.380797  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.404028  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.406072  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 20:59:16.406098  188266 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 20:59:16.456003  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:16.456030  188266 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 20:59:16.517304  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:17.377438  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377468  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377514  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377565  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377765  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377780  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377797  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377827  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377835  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377930  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378354  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378417  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378424  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.378569  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378583  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.384110  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.384130  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.384325  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.384341  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428457  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428480  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428766  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.428782  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428804  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.429011  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.429024  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.429040  188266 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-125614"
	I0731 20:59:17.431884  188266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 20:59:14.059385  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:14.059857  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:14.059879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:14.059819  189574 retry.go:31] will retry after 3.127857327s: waiting for machine to come up
	I0731 20:59:17.189405  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:17.189871  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:17.189902  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:17.189821  189574 retry.go:31] will retry after 4.516767425s: waiting for machine to come up
	I0731 20:59:14.559493  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:16.561540  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:16.561568  188133 pod_ready.go:81] duration metric: took 6.010079286s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.561580  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068734  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.068756  188133 pod_ready.go:81] duration metric: took 1.507167128s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068766  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073069  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.073086  188133 pod_ready.go:81] duration metric: took 4.313817ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073095  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077480  188133 pod_ready.go:92] pod "kube-proxy-99jgm" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.077497  188133 pod_ready.go:81] duration metric: took 4.395483ms for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077506  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082197  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.082221  188133 pod_ready.go:81] duration metric: took 4.709042ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082234  188133 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:17.433072  188266 addons.go:510] duration metric: took 1.371850333s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 20:59:18.280135  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:20.280881  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.082812  187862 start.go:364] duration metric: took 58.27194035s to acquireMachinesLock for "embed-certs-831240"
	I0731 20:59:23.082866  187862 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:23.082875  187862 fix.go:54] fixHost starting: 
	I0731 20:59:23.083267  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:23.083308  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:23.101291  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0731 20:59:23.101826  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:23.102464  187862 main.go:141] libmachine: Using API Version  1
	I0731 20:59:23.102498  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:23.102817  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:23.103024  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:23.103187  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 20:59:23.105117  187862 fix.go:112] recreateIfNeeded on embed-certs-831240: state=Stopped err=<nil>
	I0731 20:59:23.105143  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	W0731 20:59:23.105307  187862 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:23.106919  187862 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831240" ...
	I0731 20:59:21.708296  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708811  188656 main.go:141] libmachine: (old-k8s-version-239115) Found IP for machine: 192.168.61.51
	I0731 20:59:21.708846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has current primary IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708860  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserving static IP address...
	I0731 20:59:21.709432  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.709663  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserved static IP address: 192.168.61.51
	I0731 20:59:21.709695  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | skip adding static IP to network mk-old-k8s-version-239115 - found existing host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"}
	I0731 20:59:21.709711  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting for SSH to be available...
	I0731 20:59:21.709723  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:59:21.711911  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712310  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.712345  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712517  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:59:21.712540  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:59:21.712581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:21.712598  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:59:21.712625  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:59:21.838026  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:21.838370  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:59:21.839169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:21.842168  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842588  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.842623  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842866  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:59:21.843126  188656 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:21.843150  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:21.843388  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.846148  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846657  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.846686  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.847165  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847360  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847530  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.847707  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.847938  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.847951  188656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:21.955109  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:21.955143  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955460  188656 buildroot.go:166] provisioning hostname "old-k8s-version-239115"
	I0731 20:59:21.955492  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955728  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.958752  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959146  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.959176  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959395  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.959620  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959781  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959918  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.960078  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.960358  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.960378  188656 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239115 && echo "old-k8s-version-239115" | sudo tee /etc/hostname
	I0731 20:59:22.090625  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239115
	
	I0731 20:59:22.090665  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.093927  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094356  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.094387  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094729  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.094942  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095153  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095364  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.095583  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.095819  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.095845  188656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:22.217153  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:22.217189  188656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:22.217215  188656 buildroot.go:174] setting up certificates
	I0731 20:59:22.217229  188656 provision.go:84] configureAuth start
	I0731 20:59:22.217242  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:22.217613  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:22.220640  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221082  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.221125  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221237  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.223811  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224152  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.224180  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224337  188656 provision.go:143] copyHostCerts
	I0731 20:59:22.224405  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:22.224418  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:22.224485  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:22.224604  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:22.224616  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:22.224654  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:22.224729  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:22.224740  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:22.224766  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:22.224833  188656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239115 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-239115]
	I0731 20:59:22.407532  188656 provision.go:177] copyRemoteCerts
	I0731 20:59:22.407599  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:22.407625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.410594  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411007  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.411033  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411338  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.411582  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.411811  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.412007  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.492781  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:22.518278  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 20:59:22.543018  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:22.568888  188656 provision.go:87] duration metric: took 351.643ms to configureAuth
	I0731 20:59:22.568920  188656 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:22.569099  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:59:22.569169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.572154  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572471  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.572500  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572669  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.572872  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.572993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.573112  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.573249  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.573481  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.573512  188656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:22.847156  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:22.847193  188656 machine.go:97] duration metric: took 1.004049055s to provisionDockerMachine
	I0731 20:59:22.847211  188656 start.go:293] postStartSetup for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:59:22.847229  188656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:22.847284  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:22.847710  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:22.847741  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.850515  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.850935  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.850962  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.851088  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.851288  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.851524  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.851674  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.932316  188656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:22.936672  188656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:22.936707  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:22.936792  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:22.936894  188656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:22.937011  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:22.946454  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:22.972952  188656 start.go:296] duration metric: took 125.72216ms for postStartSetup
	I0731 20:59:22.972996  188656 fix.go:56] duration metric: took 22.554695114s for fixHost
	I0731 20:59:22.973026  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.975758  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976166  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.976198  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976320  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.976585  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976782  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976966  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.977115  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.977275  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.977284  188656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:23.082657  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459563.026856067
	
	I0731 20:59:23.082683  188656 fix.go:216] guest clock: 1722459563.026856067
	I0731 20:59:23.082694  188656 fix.go:229] Guest: 2024-07-31 20:59:23.026856067 +0000 UTC Remote: 2024-07-31 20:59:22.973000729 +0000 UTC m=+249.171273714 (delta=53.855338ms)
	I0731 20:59:23.082721  188656 fix.go:200] guest clock delta is within tolerance: 53.855338ms
	I0731 20:59:23.082727  188656 start.go:83] releasing machines lock for "old-k8s-version-239115", held for 22.664459101s
	I0731 20:59:23.082752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.083052  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:23.086626  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087093  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.087135  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087366  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.087954  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088159  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088251  188656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:23.088303  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.088370  188656 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:23.088392  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.091710  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.091989  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092073  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092101  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092227  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092429  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.092472  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092520  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092618  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.092752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092803  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.092931  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.093100  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.093255  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.175012  188656 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:23.200192  188656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:23.348227  188656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:23.355109  188656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:23.355195  188656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:23.371683  188656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:23.371707  188656 start.go:495] detecting cgroup driver to use...
	I0731 20:59:23.371786  188656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:23.388727  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:23.408830  188656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:23.408907  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:23.423594  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:23.437876  188656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:23.559105  188656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:23.743186  188656 docker.go:233] disabling docker service ...
	I0731 20:59:23.743253  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:23.758053  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:23.779951  188656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:20.089173  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:22.092138  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.919494  188656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:24.057230  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:24.072687  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:24.094528  188656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:59:24.094600  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.106579  188656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:24.106634  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.120079  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.130759  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.142925  188656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:24.154760  188656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:24.165059  188656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:24.165113  188656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:24.179567  188656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:24.191838  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:24.339078  188656 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:24.515723  188656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:24.515810  188656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:24.521882  188656 start.go:563] Will wait 60s for crictl version
	I0731 20:59:24.521966  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:24.527655  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:24.581055  188656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:24.581151  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.623207  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.662956  188656 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:59:22.780311  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.281324  188266 node_ready.go:49] node "default-k8s-diff-port-125614" has status "Ready":"True"
	I0731 20:59:23.281373  188266 node_ready.go:38] duration metric: took 7.005540469s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:23.281387  188266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:23.291207  188266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299173  188266 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.299202  188266 pod_ready.go:81] duration metric: took 7.971632ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299215  188266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307561  188266 pod_ready.go:92] pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.307580  188266 pod_ready.go:81] duration metric: took 8.357239ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307589  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314466  188266 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.314544  188266 pod_ready.go:81] duration metric: took 6.946044ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314565  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.323341  188266 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.108292  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Start
	I0731 20:59:23.108473  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring networks are active...
	I0731 20:59:23.109160  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network default is active
	I0731 20:59:23.109575  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network mk-embed-certs-831240 is active
	I0731 20:59:23.110032  187862 main.go:141] libmachine: (embed-certs-831240) Getting domain xml...
	I0731 20:59:23.110762  187862 main.go:141] libmachine: (embed-certs-831240) Creating domain...
	I0731 20:59:24.457926  187862 main.go:141] libmachine: (embed-certs-831240) Waiting to get IP...
	I0731 20:59:24.458936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.459381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.459477  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.459375  189758 retry.go:31] will retry after 266.695372ms: waiting for machine to come up
	I0731 20:59:24.727938  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.728394  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.728532  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.728451  189758 retry.go:31] will retry after 349.84093ms: waiting for machine to come up
	I0731 20:59:25.080044  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.080634  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.080668  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.080592  189758 retry.go:31] will retry after 324.555122ms: waiting for machine to come up
	I0731 20:59:25.407332  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.407852  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.407877  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.407795  189758 retry.go:31] will retry after 580.815897ms: waiting for machine to come up
	I0731 20:59:25.990957  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.991551  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.991578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.991468  189758 retry.go:31] will retry after 570.045476ms: waiting for machine to come up
	I0731 20:59:26.563493  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:26.563901  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:26.563931  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:26.563853  189758 retry.go:31] will retry after 582.597352ms: waiting for machine to come up
	I0731 20:59:27.148256  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:27.148744  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:27.148773  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:27.148688  189758 retry.go:31] will retry after 1.105713474s: waiting for machine to come up
	I0731 20:59:24.664851  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:24.668464  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.668842  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:24.668869  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.669103  188656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:24.674448  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:24.690857  188656 kubeadm.go:883] updating cluster {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:24.691011  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:59:24.691056  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:24.744259  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:24.744348  188656 ssh_runner.go:195] Run: which lz4
	I0731 20:59:24.749358  188656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:24.754299  188656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:24.754341  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:59:26.551495  188656 crio.go:462] duration metric: took 1.802206904s to copy over tarball
	I0731 20:59:26.551571  188656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
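No preloaded images were found in the runtime above, so the cached preload tarball is copied over SSH and unpacked into /var. A rough manual equivalent of the scp + tar calls, for orientation only (the host alias "minikube-vm" is a placeholder, not a name from the log):

    # Copy the cached preload tarball to the guest and unpack it into /var
    # ("minikube-vm" is a placeholder SSH host used only in this sketch).
    scp ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 minikube-vm:/preloaded.tar.lz4
    ssh minikube-vm 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'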
	I0731 20:59:24.589677  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:26.591079  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:29.089923  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:25.824008  188266 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.824037  188266 pod_ready.go:81] duration metric: took 2.509461823s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.824052  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840569  188266 pod_ready.go:92] pod "kube-proxy-csdc4" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.840595  188266 pod_ready.go:81] duration metric: took 16.533543ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840613  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103726  188266 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:26.103759  188266 pod_ready.go:81] duration metric: took 263.1364ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103774  188266 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:28.112583  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:30.610462  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:28.255818  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:28.256478  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:28.256506  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:28.256408  189758 retry.go:31] will retry after 1.3552249s: waiting for machine to come up
	I0731 20:59:29.613070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:29.613661  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:29.613693  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:29.613620  189758 retry.go:31] will retry after 1.522319436s: waiting for machine to come up
	I0731 20:59:31.138020  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:31.138490  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:31.138522  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:31.138434  189758 retry.go:31] will retry after 1.573723862s: waiting for machine to come up
	I0731 20:59:29.653941  188656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102337952s)
	I0731 20:59:29.653974  188656 crio.go:469] duration metric: took 3.102444338s to extract the tarball
	I0731 20:59:29.653982  188656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:29.704065  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:29.745966  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:29.746010  188656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:59:29.746076  188656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.746107  188656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.746129  188656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.746149  188656 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.746170  188656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:59:29.746410  188656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.746423  188656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.746735  188656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.747998  188656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.748005  188656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.748021  188656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.748091  188656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.915865  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.918049  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.950840  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.952762  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.956317  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.959905  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:59:30.000707  188656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:59:30.000768  188656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.000821  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.007207  188656 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:59:30.007251  188656 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.007294  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.016613  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.082306  188656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:59:30.082358  188656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.082364  188656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:59:30.082414  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.082418  188656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.082557  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.089299  188656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:59:30.089382  188656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.089427  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.105150  188656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:59:30.105217  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.105246  188656 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:59:30.105264  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.105282  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.129702  188656 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:59:30.129748  188656 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.129779  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.129826  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.129853  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.129800  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.188192  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:59:30.188243  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:59:30.188342  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:59:30.188365  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.268231  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:59:30.268296  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:59:30.268337  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:59:30.287822  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:59:30.287929  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:59:30.635440  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:30.776879  188656 cache_images.go:92] duration metric: took 1.030849977s to LoadCachedImages
	W0731 20:59:30.777006  188656 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
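The LoadCachedImages pass above checks each required image in the runtime via podman, removes any stale reference via crictl, and then falls back to loading the image from the local cache directory; in this run the cache files do not exist, hence the warning. A condensed sketch of that per-image fallback (not minikube's exact code; the cache-miss handling is illustrative):

    # Per-image fallback sketch: if the image is missing from the runtime,
    # drop any stale reference and look for a locally cached copy.
    IMG=registry.k8s.io/kube-scheduler:v1.20.0
    CACHE=$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true
        [ -f "$CACHE" ] || echo "cache miss: $CACHE"   # this run fails here with "no such file or directory"
    fi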
	I0731 20:59:30.777028  188656 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0731 20:59:30.777175  188656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
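The generated kubelet unit above relies on the standard systemd drop-in override: an empty ExecStart= clears the base unit's command before the following ExecStart= redefines it with the version-pinned kubelet and its flags. A minimal sketch of installing such a drop-in (flags abbreviated; the full command line is in the log above):

    # Install a kubelet drop-in override; the empty ExecStart= resets the base
    # unit's command so the second ExecStart= fully replaces it.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.51
    EOF
    sudo systemctl daemon-reload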
	I0731 20:59:30.777284  188656 ssh_runner.go:195] Run: crio config
	I0731 20:59:30.832542  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:59:30.832570  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:30.832586  188656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:30.832618  188656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239115 NodeName:old-k8s-version-239115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:59:30.832798  188656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239115"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:30.832877  188656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:59:30.842909  188656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:30.842995  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:30.852951  188656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 20:59:30.872643  188656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:30.889851  188656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
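The kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before being applied phase by phase further down. One way to sanity-check such a config without modifying the node is a dry run with the version-pinned kubeadm binary (a sketch; kubeadm init has supported --dry-run for many releases):

    # Dry-run the staged config with the pinned kubeadm binary; nothing is written to the node.
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run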
	I0731 20:59:30.910958  188656 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:30.915645  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:30.928698  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:31.055628  188656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:31.076731  188656 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115 for IP: 192.168.61.51
	I0731 20:59:31.076759  188656 certs.go:194] generating shared ca certs ...
	I0731 20:59:31.076789  188656 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.076979  188656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:31.077041  188656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:31.077057  188656 certs.go:256] generating profile certs ...
	I0731 20:59:31.077175  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key
	I0731 20:59:31.077378  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83
	I0731 20:59:31.077514  188656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key
	I0731 20:59:31.077704  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:31.077789  188656 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:31.077806  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:31.077854  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:31.077892  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:31.077932  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:31.077997  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:31.078906  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:31.126980  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:31.167327  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:31.211947  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:31.258307  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 20:59:31.296628  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:31.342330  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:31.391114  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:31.415097  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:31.442595  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:31.472160  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:31.497814  188656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:31.515890  188656 ssh_runner.go:195] Run: openssl version
	I0731 20:59:31.523423  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:31.537984  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544161  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544225  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.552590  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:31.567190  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:31.581206  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586903  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586966  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.593485  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:31.606764  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:31.619748  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624599  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624681  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.631293  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:31.642823  188656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:31.647273  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:31.653142  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:31.659046  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:31.665552  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:31.671454  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:31.677426  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
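The openssl calls above do two things: -hash derives the subject-hash filename used by the /etc/ssl/certs lookup scheme (the *.0 symlinks created just before), and -checkend 86400 confirms each cluster certificate remains valid for at least the next 24 hours. A compact sketch of both checks using the paths from this run:

    # Install a CA into the hashed /etc/ssl/certs layout and verify a cert
    # will not expire within 24h (exit status 0 means still valid).
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400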
	I0731 20:59:31.683490  188656 kubeadm.go:392] StartCluster: {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:31.683586  188656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:31.683625  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.725466  188656 cri.go:89] found id: ""
	I0731 20:59:31.725548  188656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:31.737025  188656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:31.737050  188656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:31.737113  188656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:31.747325  188656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:31.748325  188656 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:31.748965  188656 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239115" cluster setting kubeconfig missing "old-k8s-version-239115" context setting]
	I0731 20:59:31.749997  188656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.757569  188656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:31.771188  188656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0731 20:59:31.771222  188656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:31.771236  188656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:31.771292  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.811574  188656 cri.go:89] found id: ""
	I0731 20:59:31.811653  188656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:31.829930  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:31.840145  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:31.840165  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:31.840206  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:31.851266  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:31.851340  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:31.861634  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:31.871532  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:31.871605  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:31.882164  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.892222  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:31.892291  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.903299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:31.916163  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:31.916235  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
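In the cleanup above, each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; any file that is missing or targets another endpoint is removed so the kubeconfig phase can regenerate it. A condensed sketch of that check-or-remove loop:

    # Remove any kubeconfig that does not already target the expected endpoint,
    # letting "kubeadm init phase kubeconfig" regenerate it afterwards.
    ENDPOINT='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
    done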
	I0731 20:59:31.929423  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:31.942668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.107220  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.953249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.207806  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.307640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.410338  188656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:33.410444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
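Because existing configuration files were found, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full init, then polls for a kube-apiserver process. A condensed sketch of that sequence, assuming the paths used in this run:

    # Replay the init phases with the pinned kubeadm, then wait for the apiserver
    # process to appear (the pgrep pattern matches the one polled in the log).
    K=/var/lib/minikube/binaries/v1.20.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done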
	I0731 20:59:31.221009  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:33.589275  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.612024  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:35.109601  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.713632  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:32.714137  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:32.714169  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:32.714064  189758 retry.go:31] will retry after 2.013485748s: waiting for machine to come up
	I0731 20:59:34.729625  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:34.730006  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:34.730070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:34.729970  189758 retry.go:31] will retry after 2.193072749s: waiting for machine to come up
	I0731 20:59:36.924345  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:36.924990  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:36.925008  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:36.924940  189758 retry.go:31] will retry after 3.394781674s: waiting for machine to come up
	I0731 20:59:33.910958  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.411011  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.911110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.410715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.911117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.410825  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.911311  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.410757  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.910786  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:38.410821  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.089622  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:38.589435  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:37.110446  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:39.111323  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:40.322463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:40.322827  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:40.322857  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:40.322774  189758 retry.go:31] will retry after 3.836613891s: waiting for machine to come up
	I0731 20:59:38.910891  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.411547  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.911260  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.411404  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.910719  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.411449  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.910643  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.410967  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.910703  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:43.411187  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.088768  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:43.589256  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:41.609891  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.111379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.160516  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161009  187862 main.go:141] libmachine: (embed-certs-831240) Found IP for machine: 192.168.39.92
	I0731 20:59:44.161029  187862 main.go:141] libmachine: (embed-certs-831240) Reserving static IP address...
	I0731 20:59:44.161041  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has current primary IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161561  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.161594  187862 main.go:141] libmachine: (embed-certs-831240) DBG | skip adding static IP to network mk-embed-certs-831240 - found existing host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"}
	I0731 20:59:44.161609  187862 main.go:141] libmachine: (embed-certs-831240) Reserved static IP address: 192.168.39.92
	I0731 20:59:44.161623  187862 main.go:141] libmachine: (embed-certs-831240) Waiting for SSH to be available...
	I0731 20:59:44.161638  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Getting to WaitForSSH function...
	I0731 20:59:44.163936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164285  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.164318  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164447  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH client type: external
	I0731 20:59:44.164479  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa (-rw-------)
	I0731 20:59:44.164499  187862 main.go:141] libmachine: (embed-certs-831240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:44.164510  187862 main.go:141] libmachine: (embed-certs-831240) DBG | About to run SSH command:
	I0731 20:59:44.164544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | exit 0
	I0731 20:59:44.293463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:44.293819  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetConfigRaw
	I0731 20:59:44.294490  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.296982  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297351  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.297381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297634  187862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/config.json ...
	I0731 20:59:44.297877  187862 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:44.297897  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:44.298116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.300452  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300806  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.300829  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300953  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.301146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301308  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301439  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.301634  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.301811  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.301823  187862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:44.418065  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:44.418105  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418428  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:59:44.418446  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418666  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.421984  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422403  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.422434  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422568  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.422733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.422893  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.423023  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.423208  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.423371  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.423410  187862 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831240 && echo "embed-certs-831240" | sudo tee /etc/hostname
	I0731 20:59:44.549670  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831240
	
	I0731 20:59:44.549697  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.552503  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.552851  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.552876  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.553017  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.553200  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553398  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553533  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.553721  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.554012  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.554039  187862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831240/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:44.674662  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:44.674693  187862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:44.674713  187862 buildroot.go:174] setting up certificates
	I0731 20:59:44.674723  187862 provision.go:84] configureAuth start
	I0731 20:59:44.674733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.675011  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.677631  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.677911  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.677951  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.678081  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.679869  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680177  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.680205  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680332  187862 provision.go:143] copyHostCerts
	I0731 20:59:44.680391  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:44.680401  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:44.680450  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:44.680537  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:44.680545  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:44.680564  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:44.680628  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:44.680635  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:44.680652  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:44.680711  187862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831240 san=[127.0.0.1 192.168.39.92 embed-certs-831240 localhost minikube]
	I0731 20:59:44.733872  187862 provision.go:177] copyRemoteCerts
	I0731 20:59:44.733927  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:44.733951  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.736399  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736731  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.736758  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736935  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.737131  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.737273  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.737430  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:44.824050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:44.847699  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 20:59:44.872138  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:44.896013  187862 provision.go:87] duration metric: took 221.275458ms to configureAuth
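The copyRemoteCerts step above streams the CA and server PEM files to /etc/docker on the guest over the same SSH connection the runner uses for commands. A minimal sketch of that idea, not minikube's actual implementation (host, user, and file paths below are placeholders loosely taken from the log), using golang.org/x/crypto/ssh:

    // copycert.go - hedged sketch: push a local PEM file to a remote path over SSH,
    // roughly what the copyRemoteCerts/scp log lines above are doing.
    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("id_rsa") // placeholder: machine SSH key
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.92:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	pem, err := os.ReadFile("ca.pem") // placeholder: local cert to copy
    	if err != nil {
    		log.Fatal(err)
    	}
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(pem)
    	// Write through sudo tee so the file lands in a root-owned directory.
    	if err := sess.Run("sudo mkdir -p /etc/docker && sudo tee /etc/docker/ca.pem >/dev/null"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("copied ca.pem")
    }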
	I0731 20:59:44.896042  187862 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:44.896234  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:44.896327  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.898820  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899206  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.899232  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899457  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.899660  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899822  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899993  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.900216  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.900438  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.900462  187862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:45.179165  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:45.179194  187862 machine.go:97] duration metric: took 881.302407ms to provisionDockerMachine
	I0731 20:59:45.179213  187862 start.go:293] postStartSetup for "embed-certs-831240" (driver="kvm2")
	I0731 20:59:45.179226  187862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:45.179252  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.179615  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:45.179646  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.182617  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183047  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.183069  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183284  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.183510  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.183654  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.183805  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.273492  187862 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:45.277593  187862 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:45.277618  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:45.277687  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:45.277782  187862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:45.277889  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:45.288172  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:45.311763  187862 start.go:296] duration metric: took 132.534326ms for postStartSetup
	I0731 20:59:45.311803  187862 fix.go:56] duration metric: took 22.228928797s for fixHost
	I0731 20:59:45.311827  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.314578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.314962  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.314998  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.315146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.315381  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315549  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315681  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.315868  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:45.316035  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:45.316045  187862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:59:45.426289  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459585.381297707
	
	I0731 20:59:45.426314  187862 fix.go:216] guest clock: 1722459585.381297707
	I0731 20:59:45.426324  187862 fix.go:229] Guest: 2024-07-31 20:59:45.381297707 +0000 UTC Remote: 2024-07-31 20:59:45.311808006 +0000 UTC m=+363.090091892 (delta=69.489701ms)
	I0731 20:59:45.426379  187862 fix.go:200] guest clock delta is within tolerance: 69.489701ms
	I0731 20:59:45.426387  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 22.343543995s
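The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it to the host clock, and only resynchronize when the delta exceeds a tolerance. A toy version of that comparison (the tolerance below is illustrative, not minikube's constant; the timestamps are copied from the log above):

    // clockdelta.go - hedged sketch of the guest-vs-host clock tolerance check above.
    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    // withinTolerance reports whether the guest clock is close enough to the host
    // clock that no resync is needed. tol is illustrative only.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	return delta, math.Abs(float64(delta)) <= float64(tol)
    }

    func main() {
    	guest := time.Date(2024, 7, 31, 20, 59, 45, 381297707, time.UTC) // from the log above
    	host := time.Date(2024, 7, 31, 20, 59, 45, 311808006, time.UTC)
    	delta, ok := withinTolerance(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }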
	I0731 20:59:45.426419  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.426684  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:45.429330  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429757  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.429785  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429952  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430453  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430671  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430790  187862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:45.430854  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.430905  187862 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:45.430943  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.433850  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434108  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434192  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434222  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434385  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434580  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434584  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434611  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434760  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.434768  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434939  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434929  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.435099  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.435243  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.542122  187862 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:45.548583  187862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:45.690235  187862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:45.696897  187862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:45.696986  187862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:45.714456  187862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:45.714480  187862 start.go:495] detecting cgroup driver to use...
	I0731 20:59:45.714546  187862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:45.732184  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:45.747047  187862 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:45.747104  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:45.761152  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:45.775267  187862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:45.890891  187862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:46.043503  187862 docker.go:233] disabling docker service ...
	I0731 20:59:46.043577  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:46.058174  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:46.070900  187862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:46.209527  187862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:46.343868  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:46.357583  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:46.375819  187862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:46.375875  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.386762  187862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:46.386844  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.397495  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.407654  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.418326  187862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:46.428983  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.439530  187862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.457956  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.468003  187862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:46.477332  187862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:46.477400  187862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:46.490886  187862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:46.500516  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:46.617952  187862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:46.761978  187862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:46.762088  187862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:46.767210  187862 start.go:563] Will wait 60s for crictl version
	I0731 20:59:46.767275  187862 ssh_runner.go:195] Run: which crictl
	I0731 20:59:46.771502  187862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:46.810894  187862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
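After restarting crio, start.go above waits up to 60s for the CRI socket before probing crictl. A bare-bones version of that poll loop (path, interval, and timeout are taken from or inspired by the log, not lifted from minikube's source):

    // waitsock.go - hedged sketch of "Will wait 60s for socket path /var/run/crio/crio.sock".
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("CRI socket is up")
    }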
	I0731 20:59:46.810976  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.839234  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.871209  187862 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:46.872648  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:46.875374  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875683  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:46.875698  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875900  187862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:46.880402  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:46.894098  187862 kubeadm.go:883] updating cluster {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:46.894238  187862 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:46.894300  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:46.937003  187862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:46.937079  187862 ssh_runner.go:195] Run: which lz4
	I0731 20:59:46.941158  187862 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:59:46.945395  187862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:46.945425  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
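The crio.go:510 line above decides whether the preload tarball must be copied by listing images through crictl and looking for the pinned kube-apiserver tag. A rough equivalent of that check (the JSON shape follows the CRI ListImages response as crictl prints it; the tag is copied from the log, and this is a sketch rather than minikube's code):

    // preloadcheck.go - hedged sketch of the "couldn't find preloaded image" check above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type criImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage returns true if crictl reports an image with the given tag.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs criImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("preloaded:", ok)
    }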
	I0731 20:59:43.910997  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.410783  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.911365  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.410690  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.911150  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.411384  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.910579  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.411171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.910578  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:48.411377  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.589690  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:47.591464  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:46.608955  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.611634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:50.615557  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.414703  187862 crio.go:462] duration metric: took 1.473569222s to copy over tarball
	I0731 20:59:48.414789  187862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:50.666750  187862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251926888s)
	I0731 20:59:50.666783  187862 crio.go:469] duration metric: took 2.252043688s to extract the tarball
	I0731 20:59:50.666793  187862 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:50.707188  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:50.749781  187862 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:50.749808  187862 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:59:50.749817  187862 kubeadm.go:934] updating node { 192.168.39.92 8443 v1.30.3 crio true true} ...
	I0731 20:59:50.749923  187862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-831240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:50.749998  187862 ssh_runner.go:195] Run: crio config
	I0731 20:59:50.797191  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:50.797214  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:50.797227  187862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:50.797253  187862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831240 NodeName:embed-certs-831240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:50.797484  187862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
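The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick, hedged sanity check that the stream parses and lists each document's kind (the path comes from the log; gopkg.in/yaml.v3 is simply one convenient decoder, not what minikube itself uses here):

    // kubeadmcheck.go - hedged sketch: confirm the generated kubeadm.yaml parses and
    // print the kind of every document in the stream.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // end of the multi-document stream
    			}
    			log.Fatal(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }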
	
	I0731 20:59:50.797556  187862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:50.808170  187862 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:50.808236  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:50.817847  187862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 20:59:50.834107  187862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:50.849722  187862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 20:59:50.866599  187862 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:50.870727  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:50.884490  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:51.043488  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:51.064792  187862 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240 for IP: 192.168.39.92
	I0731 20:59:51.064816  187862 certs.go:194] generating shared ca certs ...
	I0731 20:59:51.064836  187862 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:51.065142  187862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:51.065225  187862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:51.065254  187862 certs.go:256] generating profile certs ...
	I0731 20:59:51.065443  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/client.key
	I0731 20:59:51.065571  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key.4e545c52
	I0731 20:59:51.065639  187862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key
	I0731 20:59:51.065798  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:51.065846  187862 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:51.065857  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:51.065883  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:51.065909  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:51.065929  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:51.065971  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:51.066633  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:51.107287  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:51.138745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:51.176139  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:51.211344  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 20:59:51.241050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:59:51.269307  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:51.293184  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:59:51.316745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:51.343620  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:51.367293  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:51.391789  187862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:51.413821  187862 ssh_runner.go:195] Run: openssl version
	I0731 20:59:51.420455  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:51.431721  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436672  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436724  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.442604  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:51.453601  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:51.464109  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468598  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468648  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.474333  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:51.484758  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:51.495093  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499557  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499605  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.505244  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:51.515545  187862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:51.519923  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:51.525696  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:51.531430  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:51.537082  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:51.542713  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:51.548206  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:59:51.553705  187862 kubeadm.go:392] StartCluster: {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:51.553793  187862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:51.553841  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.592396  187862 cri.go:89] found id: ""
	I0731 20:59:51.592472  187862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:51.602510  187862 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:51.602528  187862 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:51.602578  187862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:51.612384  187862 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:51.613530  187862 kubeconfig.go:125] found "embed-certs-831240" server: "https://192.168.39.92:8443"
	I0731 20:59:51.615991  187862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:51.625205  187862 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I0731 20:59:51.625239  187862 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:51.625253  187862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:51.625307  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.663278  187862 cri.go:89] found id: ""
	I0731 20:59:51.663370  187862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:51.678876  187862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:51.688071  187862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:51.688092  187862 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:51.688139  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:51.696441  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:51.696494  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:51.705310  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:51.713545  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:51.713599  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:51.723512  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.732304  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:51.732380  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.741301  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:51.749537  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:51.749583  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:51.758609  187862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:51.774450  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:51.888916  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:48.910784  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.411137  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.911453  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.411128  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.911431  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.410483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.910975  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.411519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.911079  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.410802  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.094603  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.589951  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:53.424691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:55.609675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.666705  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.899759  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.975806  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:53.050422  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:53.050493  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.551073  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.051427  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.551268  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.570361  187862 api_server.go:72] duration metric: took 1.519937245s to wait for apiserver process to appear ...
	I0731 20:59:54.570389  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:54.570414  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:53.911405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.410870  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.911330  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.411491  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.911380  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.411483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.910602  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.411228  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.910486  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:58.411198  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.260421  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.260455  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.260469  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.284265  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.284301  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.570976  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.575616  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:57.575644  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.071247  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.075871  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.075903  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.570906  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.581990  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.582038  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:59.070528  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:59.074787  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 20:59:59.081502  187862 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:59.081541  187862 api_server.go:131] duration metric: took 4.511132973s to wait for apiserver health ...
	I0731 20:59:59.081552  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:59.081561  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:59.083504  187862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:55.089279  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:57.589380  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:59.084894  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:59.098139  187862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:59.118458  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:59.128022  187862 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:59.128061  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:59.128071  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:59.128082  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:59.128100  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:59.128113  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:59.128121  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:59.128134  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:59.128145  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:59.128156  187862 system_pods.go:74] duration metric: took 9.673815ms to wait for pod list to return data ...
	I0731 20:59:59.128168  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:59.131825  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:59.131853  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:59.131865  187862 node_conditions.go:105] duration metric: took 3.691724ms to run NodePressure ...
	I0731 20:59:59.131897  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:59.494923  187862 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501848  187862 kubeadm.go:739] kubelet initialised
	I0731 20:59:59.501875  187862 kubeadm.go:740] duration metric: took 6.920816ms waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501885  187862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:59.510503  187862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.518204  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518234  187862 pod_ready.go:81] duration metric: took 7.702873ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.518247  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518263  187862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.523236  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523258  187862 pod_ready.go:81] duration metric: took 4.985299ms for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.523266  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.535237  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535256  187862 pod_ready.go:81] duration metric: took 11.97449ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.535270  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.541512  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541531  187862 pod_ready.go:81] duration metric: took 6.24797ms for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.541539  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541545  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.922722  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922757  187862 pod_ready.go:81] duration metric: took 381.203526ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.922771  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922779  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.322049  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322077  187862 pod_ready.go:81] duration metric: took 399.289505ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.322088  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322094  187862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.722961  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.722993  187862 pod_ready.go:81] duration metric: took 400.88956ms for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.723008  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.723017  187862 pod_ready.go:38] duration metric: took 1.221112347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:00:00.723050  187862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:00:00.735642  187862 ops.go:34] apiserver oom_adj: -16
	I0731 21:00:00.735697  187862 kubeadm.go:597] duration metric: took 9.133136671s to restartPrimaryControlPlane
	I0731 21:00:00.735735  187862 kubeadm.go:394] duration metric: took 9.182030801s to StartCluster
	I0731 21:00:00.735764  187862 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.735860  187862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:00:00.737955  187862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.738247  187862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:00:00.738329  187862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:00:00.738418  187862 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831240"
	I0731 21:00:00.738432  187862 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831240"
	I0731 21:00:00.738463  187862 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-831240"
	W0731 21:00:00.738475  187862 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:00:00.738481  187862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831240"
	I0731 21:00:00.738513  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738547  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:00:00.738581  187862 addons.go:69] Setting metrics-server=true in profile "embed-certs-831240"
	I0731 21:00:00.738651  187862 addons.go:234] Setting addon metrics-server=true in "embed-certs-831240"
	W0731 21:00:00.738666  187862 addons.go:243] addon metrics-server should already be in state true
	I0731 21:00:00.738735  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738818  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738858  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.738897  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738960  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.739144  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.739190  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.740244  187862 out.go:177] * Verifying Kubernetes components...
	I0731 21:00:00.746003  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:00.755735  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0731 21:00:00.755773  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0731 21:00:00.756268  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756271  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756594  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0731 21:00:00.756820  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756847  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.756892  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756917  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757069  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.757228  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757254  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757458  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.757638  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.757668  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757745  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.757774  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.758005  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.758543  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.758586  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.761553  187862 addons.go:234] Setting addon default-storageclass=true in "embed-certs-831240"
	W0731 21:00:00.761587  187862 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:00:00.761618  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.762018  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.762070  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.775492  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0731 21:00:00.776091  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.776712  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.776743  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.776760  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I0731 21:00:00.777245  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.777402  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.777513  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.777920  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.777945  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.778185  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0731 21:00:00.778393  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.778603  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.778687  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.779223  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.779243  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.779665  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.779718  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.780231  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.780274  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.780612  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.781947  187862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:00:00.782994  187862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:58.110503  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.112109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.784194  187862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:00.784216  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:00:00.784240  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.784937  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:00:00.784958  187862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:00:00.784984  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.788544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.788947  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.788970  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789127  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.789389  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.789521  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.789548  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789571  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.789773  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.790126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.790324  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.790502  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.790663  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.799024  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0731 21:00:00.799718  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.800341  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.800360  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.800967  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.801258  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.803078  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.803555  187862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:00.803571  187862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:00:00.803591  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.809363  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.809461  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809492  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.809512  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809680  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.809858  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.810032  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.933963  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:00:00.953572  187862 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:01.036486  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:01.040636  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:00:01.040658  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:00:01.063384  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:01.068645  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:00:01.068675  187862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:00:01.090838  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:01.090861  187862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:00:01.113173  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:02.099966  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063427097s)
	I0731 21:00:02.100021  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100035  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100080  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036657274s)
	I0731 21:00:02.100129  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100338  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100441  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100452  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100461  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100580  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100605  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100615  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100623  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100698  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100709  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100723  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100866  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100875  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100882  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.107654  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.107688  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.107952  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.107968  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.108003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140031  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026799248s)
	I0731 21:00:02.140100  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140424  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140455  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140470  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140482  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140494  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140772  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140800  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140808  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140817  187862 addons.go:475] Verifying addon metrics-server=true in "embed-certs-831240"
	I0731 21:00:02.142583  187862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:00:02.143787  187862 addons.go:510] duration metric: took 1.405477731s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 20:59:58.910774  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.410697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.911233  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.411170  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.911416  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.410979  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.911444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.411537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.911216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:03.411386  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.588315  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.610109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:04.610324  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.958162  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:05.458997  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:03.910942  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.411505  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.911485  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.410763  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.910937  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.411216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.910743  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.410941  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.910922  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:08.410593  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.589597  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.089475  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.090023  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:06.610390  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.110758  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.958154  187862 node_ready.go:49] node "embed-certs-831240" has status "Ready":"True"
	I0731 21:00:07.958180  187862 node_ready.go:38] duration metric: took 7.004576791s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:07.958191  187862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:00:07.969639  187862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974704  187862 pod_ready.go:92] pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:07.974733  187862 pod_ready.go:81] duration metric: took 5.064645ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974745  187862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:09.980566  187862 pod_ready.go:102] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:10.480476  187862 pod_ready.go:92] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.480501  187862 pod_ready.go:81] duration metric: took 2.505748029s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.480511  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485850  187862 pod_ready.go:92] pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.485873  187862 pod_ready.go:81] duration metric: took 5.353478ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485883  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:08.910788  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.410807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.911286  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.411372  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.910748  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.411253  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.411208  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.910887  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:13.411318  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.589454  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.090483  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:11.610842  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.110306  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:12.492346  187862 pod_ready.go:102] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.991859  187862 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.991884  187862 pod_ready.go:81] duration metric: took 3.505993775s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.991893  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997932  187862 pod_ready.go:92] pod "kube-proxy-x662j" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.997961  187862 pod_ready.go:81] duration metric: took 6.060225ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997974  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007155  187862 pod_ready.go:92] pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:14.007178  187862 pod_ready.go:81] duration metric: took 9.197289ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007187  187862 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:16.013417  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.910943  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.410728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.911343  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.410545  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.910560  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.411117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.910537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.410761  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.910796  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:18.411138  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.589010  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.589215  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:16.609886  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.610209  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.611613  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.013504  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.513116  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.911394  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.411098  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.910629  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.410698  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.910760  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.410503  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.910582  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.410724  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.910792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:23.410961  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.089938  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.588082  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.109996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:25.110361  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:22.514254  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:24.514729  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.013263  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.910510  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.410725  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.411543  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.911473  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.410494  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.910519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.410950  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.911528  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:28.411350  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.589873  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.590134  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.612311  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:30.110116  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:29.014386  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:31.014534  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:28.911371  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.411269  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.911465  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.410633  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.911166  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.411184  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.910806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.410806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.911125  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:33.410942  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:33.411021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:33.461204  188656 cri.go:89] found id: ""
	I0731 21:00:33.461232  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.461241  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:33.461249  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:33.461313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:33.500898  188656 cri.go:89] found id: ""
	I0731 21:00:33.500927  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.500937  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:33.500944  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:33.501010  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:33.536865  188656 cri.go:89] found id: ""
	I0731 21:00:33.536889  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.536902  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:33.536908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:33.536957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:33.578540  188656 cri.go:89] found id: ""
	I0731 21:00:33.578570  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.578582  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:33.578595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:33.578686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:33.616242  188656 cri.go:89] found id: ""
	I0731 21:00:33.616266  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.616276  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:33.616283  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:33.616345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:33.650436  188656 cri.go:89] found id: ""
	I0731 21:00:33.650468  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.650479  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:33.650487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:33.650552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:33.687256  188656 cri.go:89] found id: ""
	I0731 21:00:33.687288  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.687300  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:33.687308  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:33.687365  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:33.720381  188656 cri.go:89] found id: ""
	I0731 21:00:33.720428  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.720440  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:33.720453  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:33.720469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:33.772182  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:33.772226  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:33.787323  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:33.787359  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:00:30.089778  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.587877  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.110769  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:34.610418  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:33.514142  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.013676  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:00:33.907858  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:33.907878  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:33.907892  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:33.974118  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:33.974157  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:36.513427  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:36.527531  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:36.527588  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:36.567679  188656 cri.go:89] found id: ""
	I0731 21:00:36.567706  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.567714  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:36.567726  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:36.567786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:36.608106  188656 cri.go:89] found id: ""
	I0731 21:00:36.608134  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.608145  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:36.608153  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:36.608215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:36.651783  188656 cri.go:89] found id: ""
	I0731 21:00:36.651815  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.651824  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:36.651830  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:36.651892  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:36.686716  188656 cri.go:89] found id: ""
	I0731 21:00:36.686743  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.686751  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:36.686758  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:36.686823  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:36.721823  188656 cri.go:89] found id: ""
	I0731 21:00:36.721857  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.721865  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:36.721871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:36.721939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:36.758060  188656 cri.go:89] found id: ""
	I0731 21:00:36.758093  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.758103  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:36.758112  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:36.758173  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:36.801667  188656 cri.go:89] found id: ""
	I0731 21:00:36.801694  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.801704  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:36.801712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:36.801776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:36.845084  188656 cri.go:89] found id: ""
	I0731 21:00:36.845113  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.845124  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:36.845137  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:36.845152  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:36.897208  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:36.897248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:36.910716  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:36.910750  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:36.987259  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:36.987285  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:36.987304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:37.061109  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:37.061144  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:34.589416  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.592841  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.088346  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.611386  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.111149  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:38.516701  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.017409  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.600847  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:39.615897  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:39.615957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:39.655390  188656 cri.go:89] found id: ""
	I0731 21:00:39.655417  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.655424  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:39.655430  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:39.655502  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:39.694180  188656 cri.go:89] found id: ""
	I0731 21:00:39.694213  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.694224  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:39.694231  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:39.694300  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:39.736752  188656 cri.go:89] found id: ""
	I0731 21:00:39.736783  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.736793  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:39.736801  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:39.736860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:39.775685  188656 cri.go:89] found id: ""
	I0731 21:00:39.775770  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.775790  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:39.775802  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:39.775871  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:39.816790  188656 cri.go:89] found id: ""
	I0731 21:00:39.816820  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.816829  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:39.816835  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:39.816886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:39.854931  188656 cri.go:89] found id: ""
	I0731 21:00:39.854963  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.854973  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:39.854981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:39.855045  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:39.891039  188656 cri.go:89] found id: ""
	I0731 21:00:39.891066  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.891074  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:39.891083  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:39.891136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:39.927434  188656 cri.go:89] found id: ""
	I0731 21:00:39.927463  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.927473  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:39.927483  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:39.927496  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:39.941240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:39.941272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:40.017212  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:40.017233  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:40.017246  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:40.094047  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:40.094081  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:40.138940  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:40.138966  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:42.690818  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:42.704855  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:42.704931  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:42.752315  188656 cri.go:89] found id: ""
	I0731 21:00:42.752347  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.752368  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:42.752376  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:42.752445  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:42.790060  188656 cri.go:89] found id: ""
	I0731 21:00:42.790090  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.790101  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:42.790109  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:42.790220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:42.825504  188656 cri.go:89] found id: ""
	I0731 21:00:42.825532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.825540  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:42.825547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:42.825598  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:42.860157  188656 cri.go:89] found id: ""
	I0731 21:00:42.860193  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.860204  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:42.860213  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:42.860286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:42.902914  188656 cri.go:89] found id: ""
	I0731 21:00:42.902947  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.902959  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:42.902967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:42.903036  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:42.950503  188656 cri.go:89] found id: ""
	I0731 21:00:42.950532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.950541  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:42.950550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:42.950603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:43.010232  188656 cri.go:89] found id: ""
	I0731 21:00:43.010261  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.010272  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:43.010280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:43.010344  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:43.045487  188656 cri.go:89] found id: ""
	I0731 21:00:43.045517  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.045527  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:43.045539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:43.045556  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:43.123248  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:43.123279  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:43.123296  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:43.212230  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:43.212272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:43.254595  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:43.254626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:43.306187  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:43.306227  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:41.589806  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.088126  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.611786  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.109436  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:43.513500  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.514161  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.820246  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:45.835707  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:45.835786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:45.872079  188656 cri.go:89] found id: ""
	I0731 21:00:45.872110  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.872122  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:45.872130  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:45.872196  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:45.910637  188656 cri.go:89] found id: ""
	I0731 21:00:45.910664  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.910672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:45.910678  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:45.910740  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:45.945316  188656 cri.go:89] found id: ""
	I0731 21:00:45.945360  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.945372  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:45.945380  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:45.945455  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:45.982015  188656 cri.go:89] found id: ""
	I0731 21:00:45.982046  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.982057  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:45.982096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:45.982165  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:46.017359  188656 cri.go:89] found id: ""
	I0731 21:00:46.017392  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.017404  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:46.017412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:46.017478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:46.054401  188656 cri.go:89] found id: ""
	I0731 21:00:46.054431  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.054447  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:46.054454  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:46.054507  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:46.092107  188656 cri.go:89] found id: ""
	I0731 21:00:46.092130  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.092137  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:46.092143  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:46.092190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:46.128613  188656 cri.go:89] found id: ""
	I0731 21:00:46.128642  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.128652  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:46.128665  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:46.128679  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:46.144539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:46.144570  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:46.219399  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:46.219433  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:46.219448  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:46.304486  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:46.304529  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:46.344087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:46.344121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:46.090543  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.090607  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:46.111072  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.610316  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.611553  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.014287  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.513252  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.894728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:48.916610  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:48.916675  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:48.978515  188656 cri.go:89] found id: ""
	I0731 21:00:48.978543  188656 logs.go:276] 0 containers: []
	W0731 21:00:48.978550  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:48.978557  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:48.978615  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:49.026224  188656 cri.go:89] found id: ""
	I0731 21:00:49.026257  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.026268  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:49.026276  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:49.026354  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:49.064967  188656 cri.go:89] found id: ""
	I0731 21:00:49.064994  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.065003  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:49.065010  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:49.065070  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:49.101966  188656 cri.go:89] found id: ""
	I0731 21:00:49.101990  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.101999  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:49.102004  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:49.102056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:49.137775  188656 cri.go:89] found id: ""
	I0731 21:00:49.137801  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.137809  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:49.137815  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:49.137867  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:49.173778  188656 cri.go:89] found id: ""
	I0731 21:00:49.173824  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.173832  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:49.173839  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:49.173908  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:49.207211  188656 cri.go:89] found id: ""
	I0731 21:00:49.207239  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.207247  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:49.207254  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:49.207333  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:49.244126  188656 cri.go:89] found id: ""
	I0731 21:00:49.244159  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.244180  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:49.244202  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:49.244221  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:49.299606  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:49.299646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:49.314093  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:49.314121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:49.384691  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:49.384712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:49.384728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:49.464425  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:49.464462  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.005670  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:52.019617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:52.019705  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:52.053452  188656 cri.go:89] found id: ""
	I0731 21:00:52.053485  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.053494  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:52.053500  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:52.053552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:52.094462  188656 cri.go:89] found id: ""
	I0731 21:00:52.094495  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.094504  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:52.094510  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:52.094572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:52.134555  188656 cri.go:89] found id: ""
	I0731 21:00:52.134584  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.134595  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:52.134602  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:52.134676  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:52.168805  188656 cri.go:89] found id: ""
	I0731 21:00:52.168851  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.168863  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:52.168871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:52.168939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:52.203093  188656 cri.go:89] found id: ""
	I0731 21:00:52.203121  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.203132  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:52.203140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:52.203213  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:52.237816  188656 cri.go:89] found id: ""
	I0731 21:00:52.237842  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.237850  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:52.237857  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:52.237906  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:52.272136  188656 cri.go:89] found id: ""
	I0731 21:00:52.272175  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.272194  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:52.272202  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:52.272261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:52.306616  188656 cri.go:89] found id: ""
	I0731 21:00:52.306641  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.306649  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:52.306659  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:52.306671  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:52.372668  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:52.372690  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:52.372707  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:52.457752  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:52.457794  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.496087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:52.496129  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:52.548137  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:52.548176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:50.588204  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.089737  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.110034  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.110293  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:52.514848  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.013623  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.015221  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.063463  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:55.076922  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:55.077005  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:55.117479  188656 cri.go:89] found id: ""
	I0731 21:00:55.117511  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.117523  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:55.117531  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:55.117595  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:55.156311  188656 cri.go:89] found id: ""
	I0731 21:00:55.156339  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.156348  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:55.156354  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:55.156421  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:55.196778  188656 cri.go:89] found id: ""
	I0731 21:00:55.196807  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.196818  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:55.196826  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:55.196898  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:55.237575  188656 cri.go:89] found id: ""
	I0731 21:00:55.237605  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.237614  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:55.237620  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:55.237672  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:55.271717  188656 cri.go:89] found id: ""
	I0731 21:00:55.271746  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.271754  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:55.271760  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:55.271811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:55.307586  188656 cri.go:89] found id: ""
	I0731 21:00:55.307618  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.307630  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:55.307637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:55.307708  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:55.343325  188656 cri.go:89] found id: ""
	I0731 21:00:55.343352  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.343361  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:55.343367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:55.343418  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:55.378959  188656 cri.go:89] found id: ""
	I0731 21:00:55.378988  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.378997  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:55.379008  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:55.379021  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:55.454213  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:55.454243  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:55.454260  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:55.532802  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:55.532839  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.575903  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:55.575940  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:55.635105  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:55.635140  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.149801  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:58.162682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:58.162743  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:58.196220  188656 cri.go:89] found id: ""
	I0731 21:00:58.196245  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.196254  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:58.196260  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:58.196313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:58.231052  188656 cri.go:89] found id: ""
	I0731 21:00:58.231083  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.231093  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:58.231099  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:58.231156  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:58.265569  188656 cri.go:89] found id: ""
	I0731 21:00:58.265599  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.265612  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:58.265633  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:58.265695  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:58.300750  188656 cri.go:89] found id: ""
	I0731 21:00:58.300779  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.300788  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:58.300793  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:58.300869  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:58.333920  188656 cri.go:89] found id: ""
	I0731 21:00:58.333949  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.333958  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:58.333963  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:58.334015  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:58.368732  188656 cri.go:89] found id: ""
	I0731 21:00:58.368759  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.368771  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:58.368787  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:58.368855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:58.408454  188656 cri.go:89] found id: ""
	I0731 21:00:58.408488  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.408501  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:58.408510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:58.408575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:58.445855  188656 cri.go:89] found id: ""
	I0731 21:00:58.445888  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.445900  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:58.445913  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:58.445934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:58.496144  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:58.496177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.510708  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:58.510743  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:58.580690  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:58.580712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:58.580725  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:58.657281  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:58.657320  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.591068  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:58.088264  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.610282  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.611376  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.017831  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.514115  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.196374  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:01.209044  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:01.209111  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:01.247313  188656 cri.go:89] found id: ""
	I0731 21:01:01.247343  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.247353  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:01.247360  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:01.247443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:01.282269  188656 cri.go:89] found id: ""
	I0731 21:01:01.282300  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.282308  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:01.282314  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:01.282370  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:01.315598  188656 cri.go:89] found id: ""
	I0731 21:01:01.315628  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.315638  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:01.315644  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:01.315697  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:01.352492  188656 cri.go:89] found id: ""
	I0731 21:01:01.352521  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.352533  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:01.352540  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:01.352605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:01.387858  188656 cri.go:89] found id: ""
	I0731 21:01:01.387885  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.387894  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:01.387900  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:01.387950  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:01.425014  188656 cri.go:89] found id: ""
	I0731 21:01:01.425042  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.425052  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:01.425061  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:01.425129  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:01.463068  188656 cri.go:89] found id: ""
	I0731 21:01:01.463098  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.463107  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:01.463113  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:01.463171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:01.500174  188656 cri.go:89] found id: ""
	I0731 21:01:01.500203  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.500214  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:01.500229  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:01.500244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:01.554350  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:01.554389  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:01.569353  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:01.569394  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:01.641074  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:01.641095  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:01.641108  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:01.722340  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:01.722377  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:00.088915  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.089981  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.109888  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.109951  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.015302  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.513535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.264035  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:04.278374  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:04.278441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:04.314037  188656 cri.go:89] found id: ""
	I0731 21:01:04.314068  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.314079  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:04.314087  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:04.314159  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:04.347604  188656 cri.go:89] found id: ""
	I0731 21:01:04.347635  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.347646  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:04.347653  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:04.347718  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:04.382412  188656 cri.go:89] found id: ""
	I0731 21:01:04.382442  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.382454  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:04.382462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:04.382516  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:04.419097  188656 cri.go:89] found id: ""
	I0731 21:01:04.419130  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.419142  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:04.419150  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:04.419209  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:04.464561  188656 cri.go:89] found id: ""
	I0731 21:01:04.464592  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.464601  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:04.464607  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:04.464683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:04.500484  188656 cri.go:89] found id: ""
	I0731 21:01:04.500510  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.500518  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:04.500524  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:04.500577  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:04.536211  188656 cri.go:89] found id: ""
	I0731 21:01:04.536239  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.536250  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:04.536257  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:04.536324  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:04.569521  188656 cri.go:89] found id: ""
	I0731 21:01:04.569548  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.569556  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:04.569567  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:04.569583  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:04.621228  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:04.621261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:04.637500  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:04.637527  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:04.710577  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:04.710606  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:04.710623  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.788305  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:04.788343  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.329209  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:07.343021  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:07.343089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:07.378556  188656 cri.go:89] found id: ""
	I0731 21:01:07.378588  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.378603  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:07.378610  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:07.378679  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:07.416419  188656 cri.go:89] found id: ""
	I0731 21:01:07.416455  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.416467  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:07.416474  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:07.416538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:07.454720  188656 cri.go:89] found id: ""
	I0731 21:01:07.454749  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.454758  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:07.454764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:07.454815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:07.488963  188656 cri.go:89] found id: ""
	I0731 21:01:07.488995  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.489004  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:07.489009  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:07.489060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:07.531916  188656 cri.go:89] found id: ""
	I0731 21:01:07.531949  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.531961  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:07.531967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:07.532019  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:07.569233  188656 cri.go:89] found id: ""
	I0731 21:01:07.569266  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.569275  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:07.569281  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:07.569350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:07.606318  188656 cri.go:89] found id: ""
	I0731 21:01:07.606349  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.606360  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:07.606368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:07.606442  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:07.641408  188656 cri.go:89] found id: ""
	I0731 21:01:07.641436  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.641445  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:07.641454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:07.641466  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.681094  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:07.681123  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:07.734600  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:07.734641  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:07.748747  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:07.748779  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:07.821775  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:07.821799  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:07.821816  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.590174  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:07.089655  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.110694  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:08.610381  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.611128  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:09.013688  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:11.513361  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.399973  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:10.412908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:10.412986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:10.448866  188656 cri.go:89] found id: ""
	I0731 21:01:10.448895  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.448903  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:10.448909  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:10.448966  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:10.486309  188656 cri.go:89] found id: ""
	I0731 21:01:10.486338  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.486346  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:10.486352  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:10.486411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:10.522834  188656 cri.go:89] found id: ""
	I0731 21:01:10.522856  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.522863  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:10.522870  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:10.522929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:10.558272  188656 cri.go:89] found id: ""
	I0731 21:01:10.558304  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.558324  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:10.558330  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:10.558391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:10.596560  188656 cri.go:89] found id: ""
	I0731 21:01:10.596589  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.596600  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:10.596608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:10.596668  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:10.633488  188656 cri.go:89] found id: ""
	I0731 21:01:10.633518  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.633529  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:10.633537  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:10.633597  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:10.665779  188656 cri.go:89] found id: ""
	I0731 21:01:10.665812  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.665824  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:10.665832  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:10.665895  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:10.700526  188656 cri.go:89] found id: ""
	I0731 21:01:10.700556  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.700564  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:10.700575  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:10.700587  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:10.753507  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:10.753550  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:10.768056  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:10.768089  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:10.842120  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:10.842142  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:10.842159  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:10.916532  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:10.916565  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:13.456826  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:13.471064  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:13.471130  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:13.505660  188656 cri.go:89] found id: ""
	I0731 21:01:13.505694  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.505707  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:13.505713  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:13.505775  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:13.543084  188656 cri.go:89] found id: ""
	I0731 21:01:13.543109  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.543117  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:13.543123  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:13.543182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:13.578940  188656 cri.go:89] found id: ""
	I0731 21:01:13.578966  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.578974  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:13.578981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:13.579047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:13.617710  188656 cri.go:89] found id: ""
	I0731 21:01:13.617733  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.617740  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:13.617747  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:13.617810  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:13.653535  188656 cri.go:89] found id: ""
	I0731 21:01:13.653567  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.653579  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:13.653587  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:13.653658  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:13.687914  188656 cri.go:89] found id: ""
	I0731 21:01:13.687942  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.687953  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:13.687960  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:13.688031  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:13.725242  188656 cri.go:89] found id: ""
	I0731 21:01:13.725278  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.725287  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:13.725293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:13.725372  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:13.760890  188656 cri.go:89] found id: ""
	I0731 21:01:13.760918  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.760929  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:13.760943  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:13.760958  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:13.810212  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:13.810252  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:13.824229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:13.824259  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:09.588945  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:12.088514  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:14.088684  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.109760  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:15.109938  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.515603  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:16.013268  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:13.895306  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:13.895331  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:13.895344  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:13.976366  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:13.976411  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.520165  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:16.533970  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:16.534035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:16.571444  188656 cri.go:89] found id: ""
	I0731 21:01:16.571474  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.571482  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:16.571488  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:16.571539  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:16.608150  188656 cri.go:89] found id: ""
	I0731 21:01:16.608176  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.608186  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:16.608194  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:16.608254  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:16.643252  188656 cri.go:89] found id: ""
	I0731 21:01:16.643283  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.643294  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:16.643302  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:16.643363  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:16.679521  188656 cri.go:89] found id: ""
	I0731 21:01:16.679552  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.679563  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:16.679571  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:16.679624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:16.713502  188656 cri.go:89] found id: ""
	I0731 21:01:16.713532  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.713541  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:16.713547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:16.713624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:16.748276  188656 cri.go:89] found id: ""
	I0731 21:01:16.748309  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.748318  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:16.748324  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:16.748383  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:16.783895  188656 cri.go:89] found id: ""
	I0731 21:01:16.783929  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.783940  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:16.783948  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:16.784014  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:16.817362  188656 cri.go:89] found id: ""
	I0731 21:01:16.817392  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.817415  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:16.817425  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:16.817440  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:16.872584  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:16.872637  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:16.887240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:16.887275  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:16.961920  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:16.961949  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:16.961967  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:17.041889  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:17.041924  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.089420  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.089611  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:17.110442  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.111424  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.013772  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:20.514737  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.585935  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:19.600389  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:19.600475  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:19.635883  188656 cri.go:89] found id: ""
	I0731 21:01:19.635913  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.635924  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:19.635932  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:19.635995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:19.674413  188656 cri.go:89] found id: ""
	I0731 21:01:19.674441  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.674459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:19.674471  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:19.674538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:19.708181  188656 cri.go:89] found id: ""
	I0731 21:01:19.708211  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.708219  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:19.708224  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:19.708292  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:19.744737  188656 cri.go:89] found id: ""
	I0731 21:01:19.744774  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.744783  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:19.744791  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:19.744849  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:19.784366  188656 cri.go:89] found id: ""
	I0731 21:01:19.784398  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.784406  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:19.784412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:19.784465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:19.819234  188656 cri.go:89] found id: ""
	I0731 21:01:19.819269  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.819280  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:19.819289  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:19.819355  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:19.851462  188656 cri.go:89] found id: ""
	I0731 21:01:19.851494  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.851503  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:19.851510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:19.851563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:19.896575  188656 cri.go:89] found id: ""
	I0731 21:01:19.896604  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.896612  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:19.896624  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:19.896640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:19.952239  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:19.952284  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:19.969411  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:19.969442  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:20.042820  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:20.042847  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:20.042863  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:20.130070  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:20.130115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:22.674956  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:22.688548  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:22.688616  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:22.728750  188656 cri.go:89] found id: ""
	I0731 21:01:22.728775  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.728784  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:22.728790  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:22.728844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:22.763765  188656 cri.go:89] found id: ""
	I0731 21:01:22.763793  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.763801  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:22.763807  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:22.763858  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:22.799134  188656 cri.go:89] found id: ""
	I0731 21:01:22.799163  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.799172  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:22.799178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:22.799237  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:22.833972  188656 cri.go:89] found id: ""
	I0731 21:01:22.833998  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.834005  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:22.834011  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:22.834060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:22.869686  188656 cri.go:89] found id: ""
	I0731 21:01:22.869711  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.869719  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:22.869724  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:22.869776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:22.907919  188656 cri.go:89] found id: ""
	I0731 21:01:22.907950  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.907961  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:22.907969  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:22.908035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:22.947162  188656 cri.go:89] found id: ""
	I0731 21:01:22.947192  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.947204  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:22.947212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:22.947273  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:22.992822  188656 cri.go:89] found id: ""
	I0731 21:01:22.992860  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.992872  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:22.992884  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:22.992900  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:23.045552  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:23.045589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:23.059895  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:23.059925  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:23.135535  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:23.135561  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:23.135577  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:23.217468  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:23.217521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:20.588507  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.588759  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:21.611467  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:24.110813  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.514805  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.012583  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.013095  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.771615  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:25.785037  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:25.785115  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:25.821070  188656 cri.go:89] found id: ""
	I0731 21:01:25.821100  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.821112  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:25.821120  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:25.821176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:25.856174  188656 cri.go:89] found id: ""
	I0731 21:01:25.856206  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.856217  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:25.856225  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:25.856288  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:25.889440  188656 cri.go:89] found id: ""
	I0731 21:01:25.889473  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.889483  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:25.889490  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:25.889546  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:25.924770  188656 cri.go:89] found id: ""
	I0731 21:01:25.924796  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.924804  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:25.924811  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:25.924860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:25.963529  188656 cri.go:89] found id: ""
	I0731 21:01:25.963576  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.963588  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:25.963595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:25.963670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:26.000033  188656 cri.go:89] found id: ""
	I0731 21:01:26.000060  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.000069  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:26.000076  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:26.000133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:26.035310  188656 cri.go:89] found id: ""
	I0731 21:01:26.035341  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.035353  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:26.035359  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:26.035423  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:26.070096  188656 cri.go:89] found id: ""
	I0731 21:01:26.070119  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.070127  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:26.070138  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:26.070149  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:26.141198  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:26.141220  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:26.141237  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:26.219766  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:26.219805  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:26.264836  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:26.264864  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:26.316672  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:26.316709  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:28.832882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:24.588907  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.088961  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.089538  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:26.111336  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.609453  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:30.610379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.014929  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:31.512827  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.846243  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:28.846307  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:28.880312  188656 cri.go:89] found id: ""
	I0731 21:01:28.880339  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.880350  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:28.880358  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:28.880419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:28.914625  188656 cri.go:89] found id: ""
	I0731 21:01:28.914652  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.914660  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:28.914667  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:28.914726  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:28.949138  188656 cri.go:89] found id: ""
	I0731 21:01:28.949173  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.949185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:28.949192  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:28.949264  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:28.985229  188656 cri.go:89] found id: ""
	I0731 21:01:28.985258  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.985266  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:28.985272  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:28.985326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:29.021520  188656 cri.go:89] found id: ""
	I0731 21:01:29.021550  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.021562  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:29.021568  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:29.021629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:29.058639  188656 cri.go:89] found id: ""
	I0731 21:01:29.058671  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.058682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:29.058690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:29.058755  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:29.105435  188656 cri.go:89] found id: ""
	I0731 21:01:29.105458  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.105466  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:29.105472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:29.105528  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:29.147118  188656 cri.go:89] found id: ""
	I0731 21:01:29.147144  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.147152  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:29.147161  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:29.147177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:29.231698  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:29.231735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:29.276163  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:29.276200  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:29.330551  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:29.330589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:29.350293  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:29.350323  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:29.456073  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:31.956964  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:31.970712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:31.970780  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:32.009546  188656 cri.go:89] found id: ""
	I0731 21:01:32.009574  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.009585  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:32.009593  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:32.009674  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:32.046622  188656 cri.go:89] found id: ""
	I0731 21:01:32.046661  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.046672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:32.046680  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:32.046748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:32.080958  188656 cri.go:89] found id: ""
	I0731 21:01:32.080985  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.080993  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:32.080998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:32.081052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:32.117454  188656 cri.go:89] found id: ""
	I0731 21:01:32.117480  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.117489  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:32.117495  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:32.117561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:32.152335  188656 cri.go:89] found id: ""
	I0731 21:01:32.152369  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.152380  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:32.152387  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:32.152441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:32.186631  188656 cri.go:89] found id: ""
	I0731 21:01:32.186670  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.186682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:32.186691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:32.186761  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:32.221496  188656 cri.go:89] found id: ""
	I0731 21:01:32.221533  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.221544  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:32.221551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:32.221632  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:32.256315  188656 cri.go:89] found id: ""
	I0731 21:01:32.256341  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.256350  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:32.256360  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:32.256372  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:32.295759  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:32.295788  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:32.347855  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:32.347888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:32.360982  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:32.361012  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:32.433900  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:32.433926  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:32.433947  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:31.588474  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.590513  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:32.610672  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.110698  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.514600  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:36.013157  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
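	(Editor's note: interleaved with that loop, three other test processes (188133, 188266, 187862) keep polling their metrics-server pods, whose Ready condition never becomes True. A hypothetical kubectl one-liner equivalent to that check is sketched below; the pod name is taken from the log, while using kubectl with the host kubeconfig is an assumption.)

	  # check the Ready condition of one of the metrics-server pods seen above (bash)
	  kubectl -n kube-system get pod metrics-server-569cc877fc-jf52w \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # the log shows this condition staying "False" for all three metrics-server pods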
	I0731 21:01:35.013369  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:35.027203  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:35.027298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:35.065567  188656 cri.go:89] found id: ""
	I0731 21:01:35.065599  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.065610  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:35.065617  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:35.065686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:35.104285  188656 cri.go:89] found id: ""
	I0731 21:01:35.104317  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.104328  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:35.104335  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:35.104430  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:35.151081  188656 cri.go:89] found id: ""
	I0731 21:01:35.151108  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.151119  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:35.151127  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:35.151190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:35.196844  188656 cri.go:89] found id: ""
	I0731 21:01:35.196875  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.196886  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:35.196894  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:35.196964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:35.253581  188656 cri.go:89] found id: ""
	I0731 21:01:35.253612  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.253623  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:35.253630  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:35.253703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:35.295791  188656 cri.go:89] found id: ""
	I0731 21:01:35.295819  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.295830  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:35.295838  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:35.295904  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:35.329405  188656 cri.go:89] found id: ""
	I0731 21:01:35.329441  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.329454  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:35.329462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:35.329526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:35.363976  188656 cri.go:89] found id: ""
	I0731 21:01:35.364009  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.364022  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:35.364035  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:35.364051  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:35.421213  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:35.421253  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:35.436612  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:35.436646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:35.514154  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:35.514182  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:35.514197  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:35.588048  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:35.588082  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:38.133466  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:38.147071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:38.147142  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:38.179992  188656 cri.go:89] found id: ""
	I0731 21:01:38.180024  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.180036  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:38.180044  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:38.180116  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:38.213784  188656 cri.go:89] found id: ""
	I0731 21:01:38.213816  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.213827  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:38.213834  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:38.213901  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:38.254190  188656 cri.go:89] found id: ""
	I0731 21:01:38.254220  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.254229  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:38.254235  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:38.254284  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:38.289695  188656 cri.go:89] found id: ""
	I0731 21:01:38.289732  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.289743  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:38.289751  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:38.289819  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:38.327743  188656 cri.go:89] found id: ""
	I0731 21:01:38.327777  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.327788  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:38.327797  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:38.327853  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:38.361373  188656 cri.go:89] found id: ""
	I0731 21:01:38.361409  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.361421  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:38.361428  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:38.361501  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:38.396832  188656 cri.go:89] found id: ""
	I0731 21:01:38.396860  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.396868  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:38.396873  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:38.396923  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:38.431822  188656 cri.go:89] found id: ""
	I0731 21:01:38.431855  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.431868  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:38.431880  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:38.431895  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:38.481994  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:38.482028  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:38.495885  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:38.495911  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:38.563384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:38.563411  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:38.563437  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:38.646806  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:38.646848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:36.089465  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.590301  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:37.611057  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.110731  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.015769  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.513690  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:41.187323  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:41.200995  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:41.201063  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:41.241620  188656 cri.go:89] found id: ""
	I0731 21:01:41.241651  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.241663  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:41.241671  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:41.241745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:41.279565  188656 cri.go:89] found id: ""
	I0731 21:01:41.279595  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.279604  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:41.279609  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:41.279666  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:41.320710  188656 cri.go:89] found id: ""
	I0731 21:01:41.320744  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.320755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:41.320763  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:41.320834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:41.356428  188656 cri.go:89] found id: ""
	I0731 21:01:41.356460  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.356472  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:41.356480  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:41.356544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:41.390493  188656 cri.go:89] found id: ""
	I0731 21:01:41.390525  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.390536  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:41.390544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:41.390612  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:41.424244  188656 cri.go:89] found id: ""
	I0731 21:01:41.424271  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.424282  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:41.424290  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:41.424350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:41.459916  188656 cri.go:89] found id: ""
	I0731 21:01:41.459946  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.459955  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:41.459961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:41.460012  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:41.493891  188656 cri.go:89] found id: ""
	I0731 21:01:41.493917  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.493926  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:41.493936  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:41.493950  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:41.544066  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:41.544106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:41.558504  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:41.558534  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:41.632996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:41.633021  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:41.633039  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:41.712637  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:41.712677  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:41.087979  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:43.088834  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.610136  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:45.109986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.514059  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.514535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.014970  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.255947  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:44.268961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:44.269050  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:44.304621  188656 cri.go:89] found id: ""
	I0731 21:01:44.304656  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.304668  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:44.304676  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:44.304732  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:44.339389  188656 cri.go:89] found id: ""
	I0731 21:01:44.339429  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.339441  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:44.339448  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:44.339510  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:44.373069  188656 cri.go:89] found id: ""
	I0731 21:01:44.373095  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.373103  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:44.373110  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:44.373179  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:44.408784  188656 cri.go:89] found id: ""
	I0731 21:01:44.408812  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.408821  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:44.408829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:44.408896  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:44.445636  188656 cri.go:89] found id: ""
	I0731 21:01:44.445671  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.445682  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:44.445690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:44.445759  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:44.483529  188656 cri.go:89] found id: ""
	I0731 21:01:44.483565  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.483577  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:44.483585  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:44.483643  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:44.517959  188656 cri.go:89] found id: ""
	I0731 21:01:44.517980  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.517987  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:44.517993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:44.518042  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:44.552322  188656 cri.go:89] found id: ""
	I0731 21:01:44.552367  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.552392  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:44.552405  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:44.552421  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:44.625005  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:44.625030  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:44.625043  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:44.702547  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:44.702585  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:44.741754  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:44.741792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:44.795179  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:44.795216  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.309995  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:47.323993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:47.324076  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:47.365546  188656 cri.go:89] found id: ""
	I0731 21:01:47.365576  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.365587  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:47.365595  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:47.365682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:47.402774  188656 cri.go:89] found id: ""
	I0731 21:01:47.402810  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.402822  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:47.402831  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:47.402899  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:47.440716  188656 cri.go:89] found id: ""
	I0731 21:01:47.440746  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.440755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:47.440761  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:47.440811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:47.479418  188656 cri.go:89] found id: ""
	I0731 21:01:47.479450  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.479461  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:47.479469  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:47.479535  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:47.514027  188656 cri.go:89] found id: ""
	I0731 21:01:47.514065  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.514074  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:47.514081  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:47.514149  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:47.550178  188656 cri.go:89] found id: ""
	I0731 21:01:47.550203  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.550212  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:47.550218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:47.550271  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:47.587844  188656 cri.go:89] found id: ""
	I0731 21:01:47.587873  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.587883  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:47.587891  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:47.587945  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:47.627581  188656 cri.go:89] found id: ""
	I0731 21:01:47.627608  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.627620  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:47.627633  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:47.627647  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:47.683364  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:47.683408  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.697882  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:47.697917  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:47.773804  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:47.773834  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:47.773848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:47.859356  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:47.859404  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:45.090199  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.091328  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.610631  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.109476  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:49.514186  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.013486  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.402403  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:50.417269  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:50.417332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:50.452762  188656 cri.go:89] found id: ""
	I0731 21:01:50.452786  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.452793  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:50.452799  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:50.452852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:50.486741  188656 cri.go:89] found id: ""
	I0731 21:01:50.486771  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.486782  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:50.486789  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:50.486855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:50.526144  188656 cri.go:89] found id: ""
	I0731 21:01:50.526174  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.526185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:50.526193  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:50.526246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:50.560957  188656 cri.go:89] found id: ""
	I0731 21:01:50.560985  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.560995  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:50.561003  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:50.561065  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:50.597228  188656 cri.go:89] found id: ""
	I0731 21:01:50.597258  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.597269  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:50.597275  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:50.597357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:50.638153  188656 cri.go:89] found id: ""
	I0731 21:01:50.638183  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.638199  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:50.638208  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:50.638270  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:50.672236  188656 cri.go:89] found id: ""
	I0731 21:01:50.672266  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.672274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:50.672280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:50.672340  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:50.704069  188656 cri.go:89] found id: ""
	I0731 21:01:50.704093  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.704102  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:50.704112  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:50.704125  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:50.757973  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:50.758010  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:50.771203  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:50.771229  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:50.842937  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:50.842956  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:50.842969  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:50.925819  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:50.925857  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.470691  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:53.485260  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:53.485332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:53.524110  188656 cri.go:89] found id: ""
	I0731 21:01:53.524139  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.524148  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:53.524154  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:53.524215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:53.557642  188656 cri.go:89] found id: ""
	I0731 21:01:53.557668  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.557676  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:53.557682  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:53.557737  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:53.595594  188656 cri.go:89] found id: ""
	I0731 21:01:53.595622  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.595641  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:53.595647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:53.595712  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:53.634458  188656 cri.go:89] found id: ""
	I0731 21:01:53.634487  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.634499  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:53.634507  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:53.634567  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:53.674124  188656 cri.go:89] found id: ""
	I0731 21:01:53.674149  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.674157  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:53.674164  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:53.674234  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:53.706861  188656 cri.go:89] found id: ""
	I0731 21:01:53.706888  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.706897  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:53.706903  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:53.706957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:53.745476  188656 cri.go:89] found id: ""
	I0731 21:01:53.745504  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.745511  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:53.745522  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:53.745575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:53.780847  188656 cri.go:89] found id: ""
	I0731 21:01:53.780878  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.780889  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:53.780902  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:53.780922  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:49.589017  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.088587  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.088885  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.109889  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.110634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.014383  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.512884  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:53.853469  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:53.853497  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:53.853517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:53.930506  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:53.930544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.975439  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:53.975475  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:54.027903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:54.027937  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.542860  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:56.557744  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:56.557813  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:56.596034  188656 cri.go:89] found id: ""
	I0731 21:01:56.596065  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.596075  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:56.596082  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:56.596146  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:56.631531  188656 cri.go:89] found id: ""
	I0731 21:01:56.631561  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.631572  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:56.631579  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:56.631653  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:56.665824  188656 cri.go:89] found id: ""
	I0731 21:01:56.665853  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.665865  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:56.665872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:56.665940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:56.698965  188656 cri.go:89] found id: ""
	I0731 21:01:56.698993  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.699002  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:56.699008  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:56.699074  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:56.735314  188656 cri.go:89] found id: ""
	I0731 21:01:56.735347  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.735359  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:56.735367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:56.735443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:56.770350  188656 cri.go:89] found id: ""
	I0731 21:01:56.770383  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.770393  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:56.770402  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:56.770485  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:56.808934  188656 cri.go:89] found id: ""
	I0731 21:01:56.808962  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.808970  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:56.808976  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:56.809027  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:56.845305  188656 cri.go:89] found id: ""
	I0731 21:01:56.845331  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.845354  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:56.845366  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:56.845383  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:56.922810  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:56.922832  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:56.922846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:56.998009  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:56.998046  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:57.037905  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:57.037934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:57.092438  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:57.092469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.591334  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.089696  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.110825  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.111013  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.111696  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.513270  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.514474  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.608087  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:59.622465  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:59.622537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:59.660221  188656 cri.go:89] found id: ""
	I0731 21:01:59.660254  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.660265  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:59.660274  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:59.660338  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:59.696158  188656 cri.go:89] found id: ""
	I0731 21:01:59.696193  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.696205  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:59.696213  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:59.696272  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:59.733607  188656 cri.go:89] found id: ""
	I0731 21:01:59.733635  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.733646  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:59.733656  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:59.733727  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:59.770298  188656 cri.go:89] found id: ""
	I0731 21:01:59.770327  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.770336  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:59.770342  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:59.770396  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:59.805630  188656 cri.go:89] found id: ""
	I0731 21:01:59.805659  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.805670  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:59.805682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:59.805749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:59.841064  188656 cri.go:89] found id: ""
	I0731 21:01:59.841089  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.841098  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:59.841106  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:59.841166  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:59.877237  188656 cri.go:89] found id: ""
	I0731 21:01:59.877265  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.877274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:59.877284  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:59.877364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:59.917102  188656 cri.go:89] found id: ""
	I0731 21:01:59.917138  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.917166  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:59.917179  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:59.917196  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:59.971806  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:59.971846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:59.986267  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:59.986304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:00.063185  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:00.063227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:00.063244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:00.148498  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:00.148541  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:02.690235  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:02.704623  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:02.704703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:02.740557  188656 cri.go:89] found id: ""
	I0731 21:02:02.740588  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.740599  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:02.740606  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:02.740667  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:02.776340  188656 cri.go:89] found id: ""
	I0731 21:02:02.776382  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.776391  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:02.776396  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:02.776449  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:02.811645  188656 cri.go:89] found id: ""
	I0731 21:02:02.811673  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.811683  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:02.811691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:02.811754  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:02.847226  188656 cri.go:89] found id: ""
	I0731 21:02:02.847259  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.847267  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:02.847273  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:02.847326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:02.885591  188656 cri.go:89] found id: ""
	I0731 21:02:02.885617  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.885626  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:02.885631  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:02.885694  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:02.924250  188656 cri.go:89] found id: ""
	I0731 21:02:02.924281  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.924289  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:02.924296  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:02.924358  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:02.959608  188656 cri.go:89] found id: ""
	I0731 21:02:02.959638  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.959649  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:02.959657  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:02.959731  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:02.998175  188656 cri.go:89] found id: ""
	I0731 21:02:02.998205  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.998215  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:02.998228  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:02.998248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:03.053320  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:03.053382  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:03.067681  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:03.067711  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:03.145222  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:03.145251  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:03.145270  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:03.228413  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:03.228456  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:01.590197  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:04.087692  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:02.610477  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.110544  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:03.016030  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.513082  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.780407  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:05.793872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:05.793952  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:05.828940  188656 cri.go:89] found id: ""
	I0731 21:02:05.828971  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.828980  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:05.828987  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:05.829051  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:05.866470  188656 cri.go:89] found id: ""
	I0731 21:02:05.866503  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.866515  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:05.866522  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:05.866594  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:05.904756  188656 cri.go:89] found id: ""
	I0731 21:02:05.904792  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.904807  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:05.904814  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:05.904868  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:05.941534  188656 cri.go:89] found id: ""
	I0731 21:02:05.941564  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.941574  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:05.941581  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:05.941649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:05.980413  188656 cri.go:89] found id: ""
	I0731 21:02:05.980453  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.980465  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:05.980472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:05.980563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:06.023226  188656 cri.go:89] found id: ""
	I0731 21:02:06.023258  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.023269  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:06.023277  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:06.023345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:06.061098  188656 cri.go:89] found id: ""
	I0731 21:02:06.061130  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.061138  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:06.061145  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:06.061195  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:06.097825  188656 cri.go:89] found id: ""
	I0731 21:02:06.097852  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.097860  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:06.097870  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:06.097883  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:06.149181  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:06.149223  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:06.164610  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:06.164651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:06.248639  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:06.248666  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:06.248684  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:06.332445  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:06.332486  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:06.089967  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.588610  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.610691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.611166  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.513999  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.514554  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:11.516493  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.873697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:08.887632  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:08.887745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:08.926002  188656 cri.go:89] found id: ""
	I0731 21:02:08.926032  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.926042  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:08.926051  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:08.926117  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:08.962999  188656 cri.go:89] found id: ""
	I0731 21:02:08.963028  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.963039  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:08.963047  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:08.963103  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:09.023016  188656 cri.go:89] found id: ""
	I0731 21:02:09.023043  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.023051  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:09.023057  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:09.023109  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:09.059672  188656 cri.go:89] found id: ""
	I0731 21:02:09.059699  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.059708  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:09.059714  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:09.059774  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:09.097603  188656 cri.go:89] found id: ""
	I0731 21:02:09.097635  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.097645  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:09.097653  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:09.097720  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:09.136210  188656 cri.go:89] found id: ""
	I0731 21:02:09.136240  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.136251  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:09.136259  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:09.136326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:09.176167  188656 cri.go:89] found id: ""
	I0731 21:02:09.176204  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.176211  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:09.176218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:09.176277  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:09.214151  188656 cri.go:89] found id: ""
	I0731 21:02:09.214180  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.214189  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:09.214199  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:09.214212  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:09.267579  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:09.267618  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:09.282420  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:09.282445  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:09.354067  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:09.354092  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:09.354111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:09.433454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:09.433500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.979715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:11.993050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:11.993123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:12.027731  188656 cri.go:89] found id: ""
	I0731 21:02:12.027759  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.027767  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:12.027773  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:12.027834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:12.064410  188656 cri.go:89] found id: ""
	I0731 21:02:12.064442  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.064452  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:12.064459  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:12.064525  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:12.101061  188656 cri.go:89] found id: ""
	I0731 21:02:12.101096  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.101107  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:12.101115  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:12.101176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:12.142240  188656 cri.go:89] found id: ""
	I0731 21:02:12.142271  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.142284  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:12.142292  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:12.142357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:12.184949  188656 cri.go:89] found id: ""
	I0731 21:02:12.184980  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.184988  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:12.184994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:12.185064  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:12.226031  188656 cri.go:89] found id: ""
	I0731 21:02:12.226068  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.226080  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:12.226089  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:12.226155  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:12.272880  188656 cri.go:89] found id: ""
	I0731 21:02:12.272913  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.272923  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:12.272931  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:12.272989  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:12.306968  188656 cri.go:89] found id: ""
	I0731 21:02:12.307011  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.307033  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:12.307068  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:12.307090  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:12.359357  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:12.359402  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:12.374817  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:12.374848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:12.445107  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:12.445128  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:12.445141  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:12.530017  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:12.530058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.088281  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:13.090442  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:12.110720  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.611142  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.013967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:16.014021  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:15.070277  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:15.084326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:15.084411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:15.123513  188656 cri.go:89] found id: ""
	I0731 21:02:15.123549  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.123562  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:15.123569  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:15.123624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:15.159855  188656 cri.go:89] found id: ""
	I0731 21:02:15.159888  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.159899  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:15.159908  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:15.159973  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:15.195879  188656 cri.go:89] found id: ""
	I0731 21:02:15.195911  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.195919  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:15.195926  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:15.195986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:15.231216  188656 cri.go:89] found id: ""
	I0731 21:02:15.231249  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.231258  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:15.231265  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:15.231331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:15.265711  188656 cri.go:89] found id: ""
	I0731 21:02:15.265740  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.265748  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:15.265754  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:15.265803  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:15.300991  188656 cri.go:89] found id: ""
	I0731 21:02:15.301020  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.301027  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:15.301033  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:15.301083  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:15.338507  188656 cri.go:89] found id: ""
	I0731 21:02:15.338533  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.338542  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:15.338550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:15.338614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:15.375540  188656 cri.go:89] found id: ""
	I0731 21:02:15.375583  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.375595  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:15.375606  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:15.375631  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:15.428903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:15.428946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:15.444018  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:15.444052  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:15.518807  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.518842  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:15.518859  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:15.602655  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:15.602693  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.158731  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:18.172861  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:18.172940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:18.207451  188656 cri.go:89] found id: ""
	I0731 21:02:18.207480  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.207489  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:18.207495  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:18.207555  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:18.244974  188656 cri.go:89] found id: ""
	I0731 21:02:18.245004  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.245013  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:18.245019  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:18.245079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:18.281589  188656 cri.go:89] found id: ""
	I0731 21:02:18.281622  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.281630  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:18.281637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:18.281698  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:18.321413  188656 cri.go:89] found id: ""
	I0731 21:02:18.321445  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.321455  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:18.321461  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:18.321526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:18.360600  188656 cri.go:89] found id: ""
	I0731 21:02:18.360627  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.360639  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:18.360647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:18.360707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:18.396312  188656 cri.go:89] found id: ""
	I0731 21:02:18.396344  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.396356  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:18.396364  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:18.396451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:18.431586  188656 cri.go:89] found id: ""
	I0731 21:02:18.431618  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.431630  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:18.431637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:18.431711  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:18.472995  188656 cri.go:89] found id: ""
	I0731 21:02:18.473025  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.473035  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:18.473047  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:18.473063  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:18.558826  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:18.558865  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.600083  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:18.600110  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:18.657944  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:18.657988  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:18.672860  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:18.672888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:18.748806  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.589795  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.088699  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:17.112784  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:19.609312  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.513798  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.014437  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.249418  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:21.263304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:21.263385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:21.298591  188656 cri.go:89] found id: ""
	I0731 21:02:21.298624  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.298635  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:21.298643  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:21.298707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:21.335913  188656 cri.go:89] found id: ""
	I0731 21:02:21.335939  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.335947  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:21.335954  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:21.336011  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:21.378314  188656 cri.go:89] found id: ""
	I0731 21:02:21.378347  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.378359  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:21.378368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:21.378436  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:21.422707  188656 cri.go:89] found id: ""
	I0731 21:02:21.422738  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.422748  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:21.422757  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:21.422826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:21.487851  188656 cri.go:89] found id: ""
	I0731 21:02:21.487878  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.487887  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:21.487893  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:21.487946  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:21.528944  188656 cri.go:89] found id: ""
	I0731 21:02:21.528970  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.528981  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:21.528990  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:21.529054  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:21.565091  188656 cri.go:89] found id: ""
	I0731 21:02:21.565118  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.565126  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:21.565132  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:21.565182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:21.599985  188656 cri.go:89] found id: ""
	I0731 21:02:21.600015  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.600027  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:21.600041  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:21.600057  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:21.652065  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:21.652106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:21.666497  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:21.666528  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:21.741853  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:21.741893  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:21.741919  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:21.822478  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:21.822517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:20.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:22.589558  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.610996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.111590  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:23.513209  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:25.514400  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.363018  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:24.375640  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:24.375704  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:24.411383  188656 cri.go:89] found id: ""
	I0731 21:02:24.411416  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.411427  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:24.411436  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:24.411513  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:24.447536  188656 cri.go:89] found id: ""
	I0731 21:02:24.447565  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.447573  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:24.447578  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:24.447651  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:24.489270  188656 cri.go:89] found id: ""
	I0731 21:02:24.489301  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.489311  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:24.489320  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:24.489398  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:24.527891  188656 cri.go:89] found id: ""
	I0731 21:02:24.527922  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.527932  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:24.527938  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:24.527998  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:24.566854  188656 cri.go:89] found id: ""
	I0731 21:02:24.566886  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.566897  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:24.566904  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:24.566974  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:24.606234  188656 cri.go:89] found id: ""
	I0731 21:02:24.606267  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.606278  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:24.606285  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:24.606357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:24.642880  188656 cri.go:89] found id: ""
	I0731 21:02:24.642909  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.642921  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:24.642929  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:24.642982  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:24.680069  188656 cri.go:89] found id: ""
	I0731 21:02:24.680101  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.680112  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:24.680124  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:24.680142  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:24.735337  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:24.735378  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:24.749010  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:24.749040  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:24.826406  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:24.826441  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:24.826458  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.906995  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:24.907049  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.451405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:27.474178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:27.474251  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:27.514912  188656 cri.go:89] found id: ""
	I0731 21:02:27.514938  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.514945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:27.514951  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:27.515007  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:27.552850  188656 cri.go:89] found id: ""
	I0731 21:02:27.552880  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.552890  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:27.552896  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:27.552953  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:27.590468  188656 cri.go:89] found id: ""
	I0731 21:02:27.590496  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.590503  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:27.590509  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:27.590572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:27.626295  188656 cri.go:89] found id: ""
	I0731 21:02:27.626322  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.626330  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:27.626339  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:27.626391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:27.662654  188656 cri.go:89] found id: ""
	I0731 21:02:27.662690  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.662701  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:27.662708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:27.662770  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:27.699528  188656 cri.go:89] found id: ""
	I0731 21:02:27.699558  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.699566  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:27.699572  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:27.699639  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:27.740501  188656 cri.go:89] found id: ""
	I0731 21:02:27.740528  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.740539  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:27.740547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:27.740613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:27.778919  188656 cri.go:89] found id: ""
	I0731 21:02:27.778954  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.778966  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:27.778980  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:27.778999  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.815475  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:27.815500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:27.866578  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:27.866615  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:27.880799  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:27.880830  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:27.948987  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:27.949014  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:27.949032  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.596180  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:27.088624  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:26.610897  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:29.110263  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:28.014828  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.514006  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.532314  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:30.546245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:30.546317  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:30.581736  188656 cri.go:89] found id: ""
	I0731 21:02:30.581763  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.581772  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:30.581778  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:30.581837  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:30.618790  188656 cri.go:89] found id: ""
	I0731 21:02:30.618816  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.618824  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:30.618830  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:30.618886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:30.654504  188656 cri.go:89] found id: ""
	I0731 21:02:30.654530  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.654538  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:30.654544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:30.654603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:30.690570  188656 cri.go:89] found id: ""
	I0731 21:02:30.690598  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.690609  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:30.690617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:30.690683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:30.739676  188656 cri.go:89] found id: ""
	I0731 21:02:30.739705  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.739715  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:30.739723  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:30.739789  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:30.777860  188656 cri.go:89] found id: ""
	I0731 21:02:30.777891  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.777902  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:30.777911  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:30.777995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:30.814036  188656 cri.go:89] found id: ""
	I0731 21:02:30.814073  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.814088  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:30.814096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:30.814168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:30.847262  188656 cri.go:89] found id: ""
	I0731 21:02:30.847292  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.847304  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:30.847316  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:30.847338  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:30.898556  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:30.898596  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:30.912940  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:30.912974  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:30.987384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:30.987405  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:30.987419  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:31.071376  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:31.071416  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:33.613677  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:33.628304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:33.628380  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:33.662932  188656 cri.go:89] found id: ""
	I0731 21:02:33.662965  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.662977  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:33.662985  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:33.663055  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:33.697445  188656 cri.go:89] found id: ""
	I0731 21:02:33.697477  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.697487  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:33.697493  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:33.697553  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:33.734480  188656 cri.go:89] found id: ""
	I0731 21:02:33.734516  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.734527  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:33.734536  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:33.734614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:33.770069  188656 cri.go:89] found id: ""
	I0731 21:02:33.770095  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.770104  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:33.770111  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:33.770194  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:33.806315  188656 cri.go:89] found id: ""
	I0731 21:02:33.806341  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.806350  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:33.806356  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:33.806408  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:29.592432  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:32.088842  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:34.089378  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:31.112420  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.611815  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.014022  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:35.014517  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:37.018514  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.842747  188656 cri.go:89] found id: ""
	I0731 21:02:33.842775  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.842782  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:33.842789  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:33.842856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:33.877581  188656 cri.go:89] found id: ""
	I0731 21:02:33.877607  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.877616  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:33.877622  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:33.877682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:33.913238  188656 cri.go:89] found id: ""
	I0731 21:02:33.913263  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.913271  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:33.913282  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:33.913298  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:33.967112  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:33.967148  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:33.980961  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:33.980994  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:34.054886  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:34.054917  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:34.054939  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:34.143088  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:34.143127  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:36.687110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:36.700649  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:36.700725  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:36.737796  188656 cri.go:89] found id: ""
	I0731 21:02:36.737829  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.737841  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:36.737849  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:36.737916  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:36.773010  188656 cri.go:89] found id: ""
	I0731 21:02:36.773048  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.773059  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:36.773067  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:36.773136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:36.813945  188656 cri.go:89] found id: ""
	I0731 21:02:36.813978  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.813988  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:36.813994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:36.814047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:36.849826  188656 cri.go:89] found id: ""
	I0731 21:02:36.849860  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.849872  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:36.849880  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:36.849943  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:36.887200  188656 cri.go:89] found id: ""
	I0731 21:02:36.887233  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.887244  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:36.887253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:36.887391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:36.922529  188656 cri.go:89] found id: ""
	I0731 21:02:36.922562  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.922573  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:36.922582  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:36.922644  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:36.958119  188656 cri.go:89] found id: ""
	I0731 21:02:36.958154  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.958166  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:36.958174  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:36.958240  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:37.001071  188656 cri.go:89] found id: ""
	I0731 21:02:37.001104  188656 logs.go:276] 0 containers: []
	W0731 21:02:37.001113  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:37.001123  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:37.001136  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:37.041248  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:37.041288  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:37.100519  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:37.100558  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:37.115157  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:37.115188  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:37.191232  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:37.191259  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:37.191277  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:36.588213  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.589224  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:36.109307  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.110675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:40.111284  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.514052  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.013265  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.772834  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:39.788137  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:39.788203  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:39.827329  188656 cri.go:89] found id: ""
	I0731 21:02:39.827361  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.827371  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:39.827378  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:39.827458  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:39.864855  188656 cri.go:89] found id: ""
	I0731 21:02:39.864882  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.864889  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:39.864897  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:39.864958  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:39.901955  188656 cri.go:89] found id: ""
	I0731 21:02:39.901981  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.901990  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:39.901996  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:39.902059  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:39.941376  188656 cri.go:89] found id: ""
	I0731 21:02:39.941402  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.941412  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:39.941418  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:39.941473  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:39.975321  188656 cri.go:89] found id: ""
	I0731 21:02:39.975352  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.975364  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:39.975394  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:39.975465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:40.010106  188656 cri.go:89] found id: ""
	I0731 21:02:40.010136  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.010148  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:40.010157  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:40.010220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:40.043963  188656 cri.go:89] found id: ""
	I0731 21:02:40.043997  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.044009  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:40.044017  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:40.044089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:40.079178  188656 cri.go:89] found id: ""
	I0731 21:02:40.079216  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.079224  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:40.079234  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:40.079248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:40.141115  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:40.141158  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:40.156722  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:40.156758  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:40.233758  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:40.233782  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:40.233797  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:40.317316  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:40.317375  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:42.858649  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:42.872135  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:42.872221  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:42.911966  188656 cri.go:89] found id: ""
	I0731 21:02:42.911998  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.912007  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:42.912014  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:42.912081  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:42.950036  188656 cri.go:89] found id: ""
	I0731 21:02:42.950070  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.950079  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:42.950085  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:42.950138  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:42.987201  188656 cri.go:89] found id: ""
	I0731 21:02:42.987233  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.987245  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:42.987253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:42.987326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:43.027250  188656 cri.go:89] found id: ""
	I0731 21:02:43.027285  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.027297  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:43.027306  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:43.027374  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:43.063419  188656 cri.go:89] found id: ""
	I0731 21:02:43.063448  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.063456  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:43.063463  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:43.063527  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:43.101155  188656 cri.go:89] found id: ""
	I0731 21:02:43.101184  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.101193  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:43.101199  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:43.101249  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:43.142633  188656 cri.go:89] found id: ""
	I0731 21:02:43.142658  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.142667  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:43.142675  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:43.142741  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:43.177747  188656 cri.go:89] found id: ""
	I0731 21:02:43.177780  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.177789  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:43.177799  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:43.177813  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:43.228074  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:43.228114  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:43.242132  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:43.242165  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:43.313026  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:43.313054  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:43.313072  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:43.394620  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:43.394663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:40.589306  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.589428  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.612236  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.110401  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:44.513370  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:46.514350  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.937932  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:45.951871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:45.951964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:45.987615  188656 cri.go:89] found id: ""
	I0731 21:02:45.987642  188656 logs.go:276] 0 containers: []
	W0731 21:02:45.987650  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:45.987656  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:45.987715  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:46.022632  188656 cri.go:89] found id: ""
	I0731 21:02:46.022659  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.022667  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:46.022674  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:46.022746  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:46.061153  188656 cri.go:89] found id: ""
	I0731 21:02:46.061182  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.061191  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:46.061196  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:46.061246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:46.099168  188656 cri.go:89] found id: ""
	I0731 21:02:46.099197  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.099206  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:46.099212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:46.099266  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:46.137269  188656 cri.go:89] found id: ""
	I0731 21:02:46.137300  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.137312  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:46.137321  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:46.137403  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:46.172330  188656 cri.go:89] found id: ""
	I0731 21:02:46.172391  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.172404  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:46.172417  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:46.172489  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:46.213314  188656 cri.go:89] found id: ""
	I0731 21:02:46.213358  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.213370  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:46.213378  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:46.213451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:46.248663  188656 cri.go:89] found id: ""
	I0731 21:02:46.248697  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.248707  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:46.248719  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:46.248735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:46.305433  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:46.305472  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:46.319065  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:46.319098  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:46.387025  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:46.387046  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:46.387058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:46.476721  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:46.476769  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:44.589757  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.089954  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.112823  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.114163  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.014193  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.014760  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.020882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:49.036502  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:49.036573  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:49.076478  188656 cri.go:89] found id: ""
	I0731 21:02:49.076509  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.076518  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:49.076525  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:49.076578  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:49.116065  188656 cri.go:89] found id: ""
	I0731 21:02:49.116098  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.116106  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:49.116112  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:49.116168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:49.153237  188656 cri.go:89] found id: ""
	I0731 21:02:49.153274  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.153287  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:49.153295  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:49.153385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:49.192821  188656 cri.go:89] found id: ""
	I0731 21:02:49.192849  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.192858  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:49.192864  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:49.192918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:49.230627  188656 cri.go:89] found id: ""
	I0731 21:02:49.230660  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.230671  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:49.230679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:49.230749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:49.266575  188656 cri.go:89] found id: ""
	I0731 21:02:49.266603  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.266611  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:49.266617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:49.266688  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:49.312489  188656 cri.go:89] found id: ""
	I0731 21:02:49.312522  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.312533  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:49.312541  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:49.312613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:49.348907  188656 cri.go:89] found id: ""
	I0731 21:02:49.348932  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.348941  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:49.348950  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:49.348965  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:49.363229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:49.363267  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:49.435708  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:49.435732  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:49.435745  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.522002  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:49.522047  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:49.566823  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:49.566868  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.122660  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:52.136559  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:52.136629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:52.173198  188656 cri.go:89] found id: ""
	I0731 21:02:52.173227  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.173236  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:52.173242  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:52.173310  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:52.208464  188656 cri.go:89] found id: ""
	I0731 21:02:52.208503  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.208514  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:52.208521  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:52.208590  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:52.246052  188656 cri.go:89] found id: ""
	I0731 21:02:52.246084  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.246091  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:52.246098  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:52.246160  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:52.281798  188656 cri.go:89] found id: ""
	I0731 21:02:52.281831  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.281843  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:52.281852  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:52.281918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:52.318924  188656 cri.go:89] found id: ""
	I0731 21:02:52.318954  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.318975  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:52.318983  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:52.319052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:52.356752  188656 cri.go:89] found id: ""
	I0731 21:02:52.356788  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.356800  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:52.356809  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:52.356874  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:52.391507  188656 cri.go:89] found id: ""
	I0731 21:02:52.391537  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.391545  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:52.391551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:52.391602  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:52.430714  188656 cri.go:89] found id: ""
	I0731 21:02:52.430749  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.430761  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:52.430774  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:52.430792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:52.482600  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:52.482629  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.535317  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:52.535361  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:52.549835  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:52.549874  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:52.628319  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:52.628347  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:52.628365  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.590499  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:52.089170  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.089832  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.610237  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.112782  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:53.513932  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.516784  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.216678  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:55.231142  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:55.231225  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:55.266283  188656 cri.go:89] found id: ""
	I0731 21:02:55.266321  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.266334  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:55.266341  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:55.266399  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:55.301457  188656 cri.go:89] found id: ""
	I0731 21:02:55.301493  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.301506  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:55.301514  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:55.301574  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:55.338427  188656 cri.go:89] found id: ""
	I0731 21:02:55.338453  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.338461  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:55.338467  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:55.338521  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:55.373718  188656 cri.go:89] found id: ""
	I0731 21:02:55.373748  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.373757  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:55.373764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:55.373846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:55.410989  188656 cri.go:89] found id: ""
	I0731 21:02:55.411022  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.411034  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:55.411042  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:55.411100  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:55.452867  188656 cri.go:89] found id: ""
	I0731 21:02:55.452904  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.452915  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:55.452924  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:55.452995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:55.512781  188656 cri.go:89] found id: ""
	I0731 21:02:55.512809  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.512821  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:55.512829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:55.512894  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:55.550460  188656 cri.go:89] found id: ""
	I0731 21:02:55.550487  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.550495  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:55.550505  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:55.550521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:55.625776  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:55.625804  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:55.625821  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:55.711276  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:55.711322  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:55.765078  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:55.765111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:55.818131  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:55.818176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:58.332914  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:58.346908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:58.346992  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:58.383641  188656 cri.go:89] found id: ""
	I0731 21:02:58.383686  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.383695  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:58.383700  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:58.383753  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:58.419538  188656 cri.go:89] found id: ""
	I0731 21:02:58.419566  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.419576  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:58.419584  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:58.419649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:58.457036  188656 cri.go:89] found id: ""
	I0731 21:02:58.457069  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.457080  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:58.457088  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:58.457162  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:58.497596  188656 cri.go:89] found id: ""
	I0731 21:02:58.497621  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.497629  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:58.497635  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:58.497706  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:58.538184  188656 cri.go:89] found id: ""
	I0731 21:02:58.538211  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.538220  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:58.538226  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:58.538291  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:58.584428  188656 cri.go:89] found id: ""
	I0731 21:02:58.584457  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.584468  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:58.584476  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:58.584537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:58.625052  188656 cri.go:89] found id: ""
	I0731 21:02:58.625084  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.625096  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:58.625103  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:58.625171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:58.662222  188656 cri.go:89] found id: ""
	I0731 21:02:58.662248  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.662256  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:58.662266  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:58.662278  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:58.740491  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:58.740530  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:58.782685  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:58.782714  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:58.833620  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:58.833668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:56.091277  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.589516  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:56.609399  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.610957  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.013927  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:00.015179  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.848679  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:58.848713  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:58.925496  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.426171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:01.440261  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:01.440341  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:01.477362  188656 cri.go:89] found id: ""
	I0731 21:03:01.477393  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.477405  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:01.477414  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:01.477483  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:01.516640  188656 cri.go:89] found id: ""
	I0731 21:03:01.516675  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.516692  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:01.516701  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:01.516764  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:01.560713  188656 cri.go:89] found id: ""
	I0731 21:03:01.560744  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.560756  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:01.560762  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:01.560844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:01.604050  188656 cri.go:89] found id: ""
	I0731 21:03:01.604086  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.604097  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:01.604105  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:01.604170  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:01.641358  188656 cri.go:89] found id: ""
	I0731 21:03:01.641391  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.641401  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:01.641406  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:01.641471  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:01.677332  188656 cri.go:89] found id: ""
	I0731 21:03:01.677380  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.677390  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:01.677397  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:01.677459  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:01.713781  188656 cri.go:89] found id: ""
	I0731 21:03:01.713815  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.713826  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:01.713833  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:01.713914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:01.757499  188656 cri.go:89] found id: ""
	I0731 21:03:01.757543  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.757552  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:01.757563  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:01.757575  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:01.832330  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.832370  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:01.832384  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:01.918996  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:01.919050  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:01.979268  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:01.979307  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:02.037528  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:02.037564  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:00.591373  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.089405  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:01.110471  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.611348  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:02.513998  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:05.015060  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:04.552758  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:04.566881  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:04.566960  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:04.604631  188656 cri.go:89] found id: ""
	I0731 21:03:04.604669  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.604680  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:04.604688  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:04.604791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:04.644027  188656 cri.go:89] found id: ""
	I0731 21:03:04.644052  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.644061  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:04.644068  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:04.644134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:04.680010  188656 cri.go:89] found id: ""
	I0731 21:03:04.680037  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.680045  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:04.680050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:04.680102  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:04.717095  188656 cri.go:89] found id: ""
	I0731 21:03:04.717123  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.717133  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:04.717140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:04.717212  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:04.755297  188656 cri.go:89] found id: ""
	I0731 21:03:04.755324  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.755331  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:04.755337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:04.755387  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:04.792073  188656 cri.go:89] found id: ""
	I0731 21:03:04.792104  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.792113  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:04.792119  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:04.792168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:04.828428  188656 cri.go:89] found id: ""
	I0731 21:03:04.828460  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.828468  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:04.828475  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:04.828541  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:04.863871  188656 cri.go:89] found id: ""
	I0731 21:03:04.863905  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.863916  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:04.863929  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:04.863946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:04.879591  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:04.879626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:04.962199  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:04.962227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:04.962245  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.048502  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:05.048547  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:05.090812  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:05.090838  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:07.647307  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:07.664586  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:07.664656  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:07.719851  188656 cri.go:89] found id: ""
	I0731 21:03:07.719887  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.719899  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:07.719908  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:07.719978  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:07.778295  188656 cri.go:89] found id: ""
	I0731 21:03:07.778330  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.778343  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:07.778350  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:07.778417  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:07.817911  188656 cri.go:89] found id: ""
	I0731 21:03:07.817937  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.817947  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:07.817954  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:07.818004  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:07.853177  188656 cri.go:89] found id: ""
	I0731 21:03:07.853211  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.853222  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:07.853229  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:07.853308  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:07.888992  188656 cri.go:89] found id: ""
	I0731 21:03:07.889020  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.889046  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:07.889055  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:07.889133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:07.924327  188656 cri.go:89] found id: ""
	I0731 21:03:07.924358  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.924369  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:07.924377  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:07.924461  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:07.964438  188656 cri.go:89] found id: ""
	I0731 21:03:07.964470  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.964480  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:07.964489  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:07.964572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:08.003566  188656 cri.go:89] found id: ""
	I0731 21:03:08.003610  188656 logs.go:276] 0 containers: []
	W0731 21:03:08.003621  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:08.003634  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:08.003651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:08.044246  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:08.044286  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:08.097479  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:08.097517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:08.113636  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:08.113663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:08.187217  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:08.187244  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:08.187261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.090205  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.589488  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:06.110184  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:08.111598  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.611986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.513036  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:09.513637  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.514176  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.771248  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:10.786159  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:10.786232  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:10.823724  188656 cri.go:89] found id: ""
	I0731 21:03:10.823756  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.823769  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:10.823777  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:10.823846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:10.862440  188656 cri.go:89] found id: ""
	I0731 21:03:10.862468  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.862480  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:10.862488  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:10.862544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:10.901499  188656 cri.go:89] found id: ""
	I0731 21:03:10.901527  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.901539  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:10.901547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:10.901611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:10.940255  188656 cri.go:89] found id: ""
	I0731 21:03:10.940279  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.940287  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:10.940293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:10.940356  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:10.975315  188656 cri.go:89] found id: ""
	I0731 21:03:10.975344  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.975353  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:10.975360  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:10.975420  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:11.011453  188656 cri.go:89] found id: ""
	I0731 21:03:11.011482  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.011538  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:11.011549  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:11.011611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:11.047846  188656 cri.go:89] found id: ""
	I0731 21:03:11.047887  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.047899  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:11.047907  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:11.047972  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:11.086243  188656 cri.go:89] found id: ""
	I0731 21:03:11.086271  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.086282  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:11.086293  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:11.086309  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:11.139390  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:11.139430  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:11.154637  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:11.154669  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:11.225996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:11.226019  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:11.226035  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:11.305235  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:11.305280  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:09.589831  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.590312  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.089750  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.110191  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:15.112258  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.013609  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:16.014143  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.845792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:13.859185  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:13.859261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:13.896017  188656 cri.go:89] found id: ""
	I0731 21:03:13.896047  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.896055  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:13.896061  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:13.896123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:13.932442  188656 cri.go:89] found id: ""
	I0731 21:03:13.932475  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.932486  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:13.932494  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:13.932564  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:13.971233  188656 cri.go:89] found id: ""
	I0731 21:03:13.971265  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.971274  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:13.971280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:13.971331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:14.009757  188656 cri.go:89] found id: ""
	I0731 21:03:14.009787  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.009796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:14.009805  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:14.009870  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:14.047946  188656 cri.go:89] found id: ""
	I0731 21:03:14.047979  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.047990  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:14.047998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:14.048056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:14.084687  188656 cri.go:89] found id: ""
	I0731 21:03:14.084720  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.084731  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:14.084739  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:14.084805  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:14.124831  188656 cri.go:89] found id: ""
	I0731 21:03:14.124861  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.124870  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:14.124876  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:14.124929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:14.161242  188656 cri.go:89] found id: ""
	I0731 21:03:14.161275  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.161286  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:14.161295  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:14.161308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:14.241060  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:14.241115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:14.282382  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:14.282414  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:14.335201  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:14.335249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:14.351345  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:14.351379  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:14.436524  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:16.937313  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:16.951403  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:16.951490  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:16.991735  188656 cri.go:89] found id: ""
	I0731 21:03:16.991766  188656 logs.go:276] 0 containers: []
	W0731 21:03:16.991777  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:16.991785  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:16.991852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:17.030327  188656 cri.go:89] found id: ""
	I0731 21:03:17.030353  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.030360  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:17.030366  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:17.030419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:17.068161  188656 cri.go:89] found id: ""
	I0731 21:03:17.068195  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.068206  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:17.068214  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:17.068286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:17.105561  188656 cri.go:89] found id: ""
	I0731 21:03:17.105590  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.105601  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:17.105609  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:17.105684  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:17.144503  188656 cri.go:89] found id: ""
	I0731 21:03:17.144529  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.144540  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:17.144547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:17.144610  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:17.183709  188656 cri.go:89] found id: ""
	I0731 21:03:17.183738  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.183747  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:17.183753  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:17.183815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:17.222083  188656 cri.go:89] found id: ""
	I0731 21:03:17.222109  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.222117  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:17.222124  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:17.222178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:17.259503  188656 cri.go:89] found id: ""
	I0731 21:03:17.259534  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.259547  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:17.259561  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:17.259578  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:17.300603  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:17.300642  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:17.352194  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:17.352235  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:17.367179  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:17.367209  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:17.440051  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:17.440074  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:17.440088  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:16.589914  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.082985  188133 pod_ready.go:81] duration metric: took 4m0.000734125s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:18.083015  188133 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:03:18.083039  188133 pod_ready.go:38] duration metric: took 4m12.543404692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:18.083069  188133 kubeadm.go:597] duration metric: took 4m20.473129745s to restartPrimaryControlPlane
	W0731 21:03:18.083176  188133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:18.083210  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:17.610274  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:19.611592  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.514266  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.514967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.027644  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:20.041735  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:20.041826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:20.077436  188656 cri.go:89] found id: ""
	I0731 21:03:20.077470  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.077483  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:20.077491  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:20.077558  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:20.117420  188656 cri.go:89] found id: ""
	I0731 21:03:20.117449  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.117459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:20.117466  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:20.117533  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:20.157794  188656 cri.go:89] found id: ""
	I0731 21:03:20.157827  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.157838  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:20.157847  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:20.157914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:20.193760  188656 cri.go:89] found id: ""
	I0731 21:03:20.193788  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.193796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:20.193803  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:20.193856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:20.231731  188656 cri.go:89] found id: ""
	I0731 21:03:20.231764  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.231777  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:20.231785  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:20.231856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:20.268666  188656 cri.go:89] found id: ""
	I0731 21:03:20.268697  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.268709  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:20.268717  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:20.268786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:20.304355  188656 cri.go:89] found id: ""
	I0731 21:03:20.304392  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.304406  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:20.304414  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:20.304478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:20.343886  188656 cri.go:89] found id: ""
	I0731 21:03:20.343915  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.343927  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:20.343940  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:20.343957  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:20.358460  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:20.358494  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:20.435473  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:20.435499  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:20.435522  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:20.517961  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:20.518002  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:20.561528  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:20.561567  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.119570  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:23.134276  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:23.134366  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:23.172808  188656 cri.go:89] found id: ""
	I0731 21:03:23.172837  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.172846  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:23.172852  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:23.172914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:23.208038  188656 cri.go:89] found id: ""
	I0731 21:03:23.208067  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.208080  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:23.208086  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:23.208140  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:23.244493  188656 cri.go:89] found id: ""
	I0731 21:03:23.244523  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.244533  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:23.244539  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:23.244605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:23.280474  188656 cri.go:89] found id: ""
	I0731 21:03:23.280503  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.280510  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:23.280517  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:23.280581  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:23.317381  188656 cri.go:89] found id: ""
	I0731 21:03:23.317415  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.317428  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:23.317441  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:23.317511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:23.357023  188656 cri.go:89] found id: ""
	I0731 21:03:23.357051  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.357062  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:23.357071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:23.357134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:23.400176  188656 cri.go:89] found id: ""
	I0731 21:03:23.400211  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.400223  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:23.400230  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:23.400298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:23.440157  188656 cri.go:89] found id: ""
	I0731 21:03:23.440190  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.440201  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:23.440213  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:23.440234  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.494762  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:23.494802  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:23.511463  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:23.511510  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:23.600359  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:23.600383  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:23.600403  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:23.682683  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:23.682723  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:22.111495  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:24.112248  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:23.013460  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:25.014605  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:27.014900  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:26.225923  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:26.245708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.245791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.282882  188656 cri.go:89] found id: ""
	I0731 21:03:26.282910  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.282920  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:26.282928  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.282987  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.324227  188656 cri.go:89] found id: ""
	I0731 21:03:26.324268  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.324279  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:26.324287  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.324349  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.365996  188656 cri.go:89] found id: ""
	I0731 21:03:26.366027  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.366038  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:26.366047  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.366119  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.403790  188656 cri.go:89] found id: ""
	I0731 21:03:26.403823  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.403835  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:26.403844  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.403915  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.442924  188656 cri.go:89] found id: ""
	I0731 21:03:26.442947  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.442957  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:26.442964  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.443026  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.482260  188656 cri.go:89] found id: ""
	I0731 21:03:26.482286  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.482294  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:26.482300  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.482364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.526385  188656 cri.go:89] found id: ""
	I0731 21:03:26.526420  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.526432  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.526442  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:26.526511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:26.565217  188656 cri.go:89] found id: ""
	I0731 21:03:26.565250  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.565262  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:26.565275  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:26.565294  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:26.623437  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:26.623478  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:26.639642  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:26.639683  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:26.720274  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:26.720309  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.720325  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:26.799689  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:26.799728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:26.111147  188266 pod_ready.go:81] duration metric: took 4m0.007359775s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:26.111173  188266 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:03:26.111180  188266 pod_ready.go:38] duration metric: took 4m2.82978193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:26.111195  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:03:26.111220  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.111267  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.179210  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:26.179240  188266 cri.go:89] found id: ""
	I0731 21:03:26.179251  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:26.179315  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.184349  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.184430  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.221238  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:26.221267  188266 cri.go:89] found id: ""
	I0731 21:03:26.221277  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:26.221349  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.225908  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.225985  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.276864  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:26.276895  188266 cri.go:89] found id: ""
	I0731 21:03:26.276907  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:26.276974  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.281921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.282003  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.320868  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:26.320903  188266 cri.go:89] found id: ""
	I0731 21:03:26.320914  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:26.320984  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.326203  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.326272  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.378409  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:26.378433  188266 cri.go:89] found id: ""
	I0731 21:03:26.378442  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:26.378504  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.384006  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.384111  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.431113  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:26.431147  188266 cri.go:89] found id: ""
	I0731 21:03:26.431158  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:26.431226  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.437136  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.437213  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.484223  188266 cri.go:89] found id: ""
	I0731 21:03:26.484247  188266 logs.go:276] 0 containers: []
	W0731 21:03:26.484257  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.484263  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:26.484319  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:26.530433  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:26.530470  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.530476  188266 cri.go:89] found id: ""
	I0731 21:03:26.530486  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:26.530551  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.535747  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.541379  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:26.541406  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.586730  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.586769  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:27.133617  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:27.133672  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:27.183805  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:27.183846  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:27.226579  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:27.226620  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:27.290635  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:27.290671  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:27.330700  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:27.330732  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:27.370882  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:27.370918  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:27.426426  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:27.426471  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:27.466359  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:27.466396  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:27.515202  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:27.515235  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:27.569081  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:27.569122  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:27.586776  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:27.586809  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:30.223314  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:30.241046  188266 api_server.go:72] duration metric: took 4m14.179869513s to wait for apiserver process to appear ...
	I0731 21:03:30.241073  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:03:30.241118  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:30.241188  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:30.281267  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:30.281303  188266 cri.go:89] found id: ""
	I0731 21:03:30.281314  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:30.281397  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.285857  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:30.285927  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:30.321742  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:30.321770  188266 cri.go:89] found id: ""
	I0731 21:03:30.321779  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:30.321841  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.326210  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:30.326284  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:30.367998  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:30.368025  188266 cri.go:89] found id: ""
	I0731 21:03:30.368036  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:30.368101  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.372340  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:30.372412  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:30.413689  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:30.413714  188266 cri.go:89] found id: ""
	I0731 21:03:30.413725  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:30.413789  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.418525  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:30.418604  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:30.458505  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.458530  188266 cri.go:89] found id: ""
	I0731 21:03:30.458539  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:30.458587  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.462993  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:30.463058  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:30.500683  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.500711  188266 cri.go:89] found id: ""
	I0731 21:03:30.500722  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:30.500785  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.506197  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:30.506277  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:30.545243  188266 cri.go:89] found id: ""
	I0731 21:03:30.545273  188266 logs.go:276] 0 containers: []
	W0731 21:03:30.545284  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:30.545290  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:30.545371  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:30.588405  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:30.588459  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.588465  188266 cri.go:89] found id: ""
	I0731 21:03:30.588474  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:30.588539  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.593611  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.599345  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:30.599386  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.641530  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:30.641564  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.703655  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:30.703692  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.744119  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:30.744147  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.515238  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:32.014503  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:29.351214  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:29.365487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:29.365561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:29.402989  188656 cri.go:89] found id: ""
	I0731 21:03:29.403015  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.403022  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:29.403028  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:29.403079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:29.443276  188656 cri.go:89] found id: ""
	I0731 21:03:29.443310  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.443321  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:29.443329  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:29.443397  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:29.483285  188656 cri.go:89] found id: ""
	I0731 21:03:29.483311  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.483319  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:29.483326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:29.483384  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:29.522285  188656 cri.go:89] found id: ""
	I0731 21:03:29.522317  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.522329  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:29.522337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:29.522406  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:29.565115  188656 cri.go:89] found id: ""
	I0731 21:03:29.565145  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.565155  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:29.565163  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:29.565233  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:29.603768  188656 cri.go:89] found id: ""
	I0731 21:03:29.603805  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.603816  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:29.603822  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:29.603875  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:29.640380  188656 cri.go:89] found id: ""
	I0731 21:03:29.640406  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.640416  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:29.640424  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:29.640493  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:29.679699  188656 cri.go:89] found id: ""
	I0731 21:03:29.679727  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.679736  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:29.679749  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:29.679764  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:29.735555  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:29.735603  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:29.749670  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:29.749708  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:29.825950  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:29.825973  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:29.825989  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.915420  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:29.915463  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
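
The container status command above is deliberately tolerant of the host setup: "which crictl || echo crictl" resolves crictl if it is on the PATH, and the trailing "|| sudo docker ps -a" falls back to docker when the crictl listing fails. Below is a minimal Go sketch of the same prefer-crictl-then-docker behaviour without the shell substitution; the code is mine, not minikube's.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when available; otherwise fall back to docker,
        // mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("neither crictl nor docker could list containers:", err)
            return
        }
        fmt.Print(string(out))
    }
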
	I0731 21:03:32.462996  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:32.478659  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:32.478739  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:32.528625  188656 cri.go:89] found id: ""
	I0731 21:03:32.528651  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.528659  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:32.528665  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:32.528724  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:32.574371  188656 cri.go:89] found id: ""
	I0731 21:03:32.574399  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.574408  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:32.574414  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:32.574474  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:32.616916  188656 cri.go:89] found id: ""
	I0731 21:03:32.616960  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.616970  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:32.616975  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:32.617040  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:32.657725  188656 cri.go:89] found id: ""
	I0731 21:03:32.657758  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.657769  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:32.657777  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:32.657842  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:32.693197  188656 cri.go:89] found id: ""
	I0731 21:03:32.693226  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.693237  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:32.693245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:32.693316  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:32.733567  188656 cri.go:89] found id: ""
	I0731 21:03:32.733594  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.733602  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:32.733608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:32.733670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:32.774624  188656 cri.go:89] found id: ""
	I0731 21:03:32.774659  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.774671  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:32.774679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:32.774747  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:32.811755  188656 cri.go:89] found id: ""
	I0731 21:03:32.811790  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.811809  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:32.811822  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:32.811835  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:32.825512  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:32.825544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:32.902310  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:32.902339  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:32.902366  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:32.983347  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:32.983391  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:33.028037  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:33.028068  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:31.165988  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:31.166042  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:31.209564  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:31.209605  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:31.254061  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:31.254105  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:31.269227  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:31.269266  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:31.394442  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:31.394477  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:31.439011  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:31.439047  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:31.476798  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:31.476825  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:31.524460  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:31.524491  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:31.564254  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:31.564288  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:34.122836  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 21:03:34.128516  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 21:03:34.129484  188266 api_server.go:141] control plane version: v1.30.3
	I0731 21:03:34.129513  188266 api_server.go:131] duration metric: took 3.888432526s to wait for apiserver health ...
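
The lines above are the apiserver health gate: minikube polls the /healthz endpoint on the secured port (8444 for this default-k8s-diff-port profile) until it answers 200. The sketch below is a minimal stand-in for that loop; it skips TLS verification purely to stay short, whereas the real check authenticates against the cluster's CA and client certificates, and the URL is simply the one taken from the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Illustration only: InsecureSkipVerify stands in for minikube's cert-based client.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        url := "https://192.168.50.221:8444/healthz" // endpoint from the log above
        for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(time.Second) {
            resp, err := client.Get(url)
            if err != nil {
                continue // apiserver not reachable yet
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver is healthy")
                return
            }
        }
        fmt.Println("timed out waiting for apiserver health")
    }
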
	I0731 21:03:34.129523  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:03:34.129554  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:34.129622  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:34.167751  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:34.167781  188266 cri.go:89] found id: ""
	I0731 21:03:34.167792  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:34.167860  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.172786  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:34.172858  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:34.212172  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.212204  188266 cri.go:89] found id: ""
	I0731 21:03:34.212215  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:34.212289  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.216651  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:34.216736  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:34.263492  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:34.263515  188266 cri.go:89] found id: ""
	I0731 21:03:34.263528  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:34.263592  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.268548  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:34.268630  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:34.309420  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:34.309453  188266 cri.go:89] found id: ""
	I0731 21:03:34.309463  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:34.309529  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.313921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:34.313993  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:34.354712  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.354740  188266 cri.go:89] found id: ""
	I0731 21:03:34.354754  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:34.354818  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.359363  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:34.359446  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:34.397596  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.397622  188266 cri.go:89] found id: ""
	I0731 21:03:34.397634  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:34.397710  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.402126  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:34.402207  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:34.447198  188266 cri.go:89] found id: ""
	I0731 21:03:34.447234  188266 logs.go:276] 0 containers: []
	W0731 21:03:34.447242  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:34.447248  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:34.447304  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:34.487429  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:34.487452  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.487457  188266 cri.go:89] found id: ""
	I0731 21:03:34.487464  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:34.487519  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.494362  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.499409  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:34.499438  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.549761  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:34.549802  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.588571  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:34.588603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.646590  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:34.646635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.691320  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:34.691353  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:35.098975  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:35.099018  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:35.153924  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:35.153964  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:35.168091  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:35.168121  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:35.214469  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:35.214511  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:35.260694  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:35.260724  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:35.299230  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:35.299261  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:35.413598  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:35.413635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:35.451331  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:35.451359  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:35.582896  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:35.597483  188656 kubeadm.go:597] duration metric: took 4m3.860422558s to restartPrimaryControlPlane
	W0731 21:03:35.597559  188656 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:35.597598  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:36.054326  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:36.070199  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:36.081882  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:36.093300  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:36.093322  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:36.093396  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:36.103781  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:36.103843  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:36.114702  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:36.125213  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:36.125299  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:36.136299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.146441  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:36.146520  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.157524  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:36.168247  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:36.168327  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
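
The grep/rm pairs above are the stale-kubeconfig cleanup that runs before "kubeadm init": each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it (in this run every grep fails simply because the files do not exist yet, so the removals are no-ops). The loop below is a minimal local sketch of that logic; its structure is mine, not minikube's actual code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is missing (or the file does not exist);
            // in that case the possibly-stale file is removed so kubeadm can regenerate it.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
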
	I0731 21:03:36.178875  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:36.253662  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:03:36.253804  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:36.401385  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:36.401550  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:36.401686  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:03:36.591601  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:34.513632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.515043  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.593492  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:36.593604  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:36.593690  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:36.593817  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:36.593907  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:36.594011  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:36.594090  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:36.594215  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:36.594602  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:36.595122  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:36.595323  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:36.595414  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:36.595548  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:37.052958  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:37.178980  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:37.375085  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:37.550735  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:37.571991  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:37.575050  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:37.575227  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:37.707194  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:37.997696  188266 system_pods.go:59] 8 kube-system pods found
	I0731 21:03:37.997725  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:37.997730  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:37.997734  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:37.997738  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:37.997741  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:37.997744  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:37.997750  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:37.997754  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:37.997762  188266 system_pods.go:74] duration metric: took 3.868231958s to wait for pod list to return data ...
	I0731 21:03:37.997773  188266 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:03:38.000640  188266 default_sa.go:45] found service account: "default"
	I0731 21:03:38.000665  188266 default_sa.go:55] duration metric: took 2.88647ms for default service account to be created ...
	I0731 21:03:38.000672  188266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:03:38.007107  188266 system_pods.go:86] 8 kube-system pods found
	I0731 21:03:38.007132  188266 system_pods.go:89] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:38.007137  188266 system_pods.go:89] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:38.007142  188266 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:38.007146  188266 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:38.007152  188266 system_pods.go:89] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:38.007158  188266 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:38.007164  188266 system_pods.go:89] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:38.007168  188266 system_pods.go:89] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:38.007175  188266 system_pods.go:126] duration metric: took 6.498733ms to wait for k8s-apps to be running ...
	I0731 21:03:38.007183  188266 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:03:38.007240  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:38.026906  188266 system_svc.go:56] duration metric: took 19.708653ms WaitForService to wait for kubelet
	I0731 21:03:38.026938  188266 kubeadm.go:582] duration metric: took 4m21.965767608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:03:38.026969  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:03:38.030479  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:03:38.030554  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 21:03:38.030577  188266 node_conditions.go:105] duration metric: took 3.601933ms to run NodePressure ...
	I0731 21:03:38.030600  188266 start.go:241] waiting for startup goroutines ...
	I0731 21:03:38.030611  188266 start.go:246] waiting for cluster config update ...
	I0731 21:03:38.030626  188266 start.go:255] writing updated cluster config ...
	I0731 21:03:38.031028  188266 ssh_runner.go:195] Run: rm -f paused
	I0731 21:03:38.082629  188266 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:03:38.084590  188266 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-125614" cluster and "default" namespace by default
	I0731 21:03:37.709295  188656 out.go:204]   - Booting up control plane ...
	I0731 21:03:37.709427  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:37.722549  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:37.723455  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:37.724194  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:37.726323  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:03:39.013773  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:41.016158  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:44.360883  188133 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.27764632s)
	I0731 21:03:44.360955  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:44.379069  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:44.389518  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:44.400223  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:44.400250  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:44.400302  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:44.410644  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:44.410718  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:44.421136  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:44.431161  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:44.431231  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:44.441936  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.451761  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:44.451820  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.462692  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:44.472982  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:44.473050  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:03:44.482980  188133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:44.532539  188133 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0731 21:03:44.532637  188133 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:44.651505  188133 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:44.651654  188133 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:44.651772  188133 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:44.660564  188133 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:44.662559  188133 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:44.662676  188133 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:44.662765  188133 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:44.662878  188133 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:44.662971  188133 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:44.663073  188133 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:44.663142  188133 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:44.663218  188133 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:44.663293  188133 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:44.663389  188133 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:44.663527  188133 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:44.663587  188133 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:44.663679  188133 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:44.813556  188133 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:44.908380  188133 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:03:45.005215  188133 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:45.138446  188133 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:45.222892  188133 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:45.223622  188133 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:45.226748  188133 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:43.513039  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.513901  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.228799  188133 out.go:204]   - Booting up control plane ...
	I0731 21:03:45.228934  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:45.229087  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:45.230021  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:45.249145  188133 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:45.258184  188133 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:45.258267  188133 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:45.392726  188133 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:03:45.392852  188133 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:03:45.899754  188133 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.694095ms
	I0731 21:03:45.899857  188133 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:03:51.901713  188133 kubeadm.go:310] [api-check] The API server is healthy after 6.00194457s
	I0731 21:03:51.914947  188133 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:03:51.932510  188133 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:03:51.971055  188133 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:03:51.971273  188133 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-916885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:03:51.985104  188133 kubeadm.go:310] [bootstrap-token] Using token: q86dx8.9ipyjyidvcwogxce
	I0731 21:03:47.515248  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:50.016206  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:51.986447  188133 out.go:204]   - Configuring RBAC rules ...
	I0731 21:03:51.986576  188133 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:03:51.993910  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:03:52.002474  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:03:52.007035  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:03:52.011708  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:03:52.020500  188133 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:03:52.310057  188133 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:03:52.778266  188133 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:03:53.308425  188133 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:03:53.309509  188133 kubeadm.go:310] 
	I0731 21:03:53.309585  188133 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:03:53.309597  188133 kubeadm.go:310] 
	I0731 21:03:53.309686  188133 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:03:53.309694  188133 kubeadm.go:310] 
	I0731 21:03:53.309715  188133 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:03:53.309771  188133 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:03:53.309875  188133 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:03:53.309894  188133 kubeadm.go:310] 
	I0731 21:03:53.310007  188133 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:03:53.310027  188133 kubeadm.go:310] 
	I0731 21:03:53.310088  188133 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:03:53.310099  188133 kubeadm.go:310] 
	I0731 21:03:53.310164  188133 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:03:53.310275  188133 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:03:53.310371  188133 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:03:53.310396  188133 kubeadm.go:310] 
	I0731 21:03:53.310499  188133 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:03:53.310601  188133 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:03:53.310611  188133 kubeadm.go:310] 
	I0731 21:03:53.310735  188133 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.310910  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 21:03:53.310961  188133 kubeadm.go:310] 	--control-plane 
	I0731 21:03:53.310970  188133 kubeadm.go:310] 
	I0731 21:03:53.311078  188133 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:03:53.311092  188133 kubeadm.go:310] 
	I0731 21:03:53.311222  188133 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.311402  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 21:03:53.312409  188133 kubeadm.go:310] W0731 21:03:44.497219    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312703  188133 kubeadm.go:310] W0731 21:03:44.498106    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312811  188133 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:03:53.312857  188133 cni.go:84] Creating CNI manager for ""
	I0731 21:03:53.312870  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:03:53.315035  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:03:53.316406  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:03:53.327870  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
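
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration for the kvm2 + crio combination. The log does not show its contents; a bridge conflist typically looks roughly like the sample below, with a bridge plugin using host-local IPAM plus a portmap plugin. Every value here is illustrative and may differ from what this run actually wrote.

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
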
	I0731 21:03:53.352757  188133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:03:53.352902  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:53.352919  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-916885 minikube.k8s.io/updated_at=2024_07_31T21_03_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=no-preload-916885 minikube.k8s.io/primary=true
	I0731 21:03:53.403275  188133 ops.go:34] apiserver oom_adj: -16
	I0731 21:03:53.579520  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.080457  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.579898  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.080464  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.580211  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.080518  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.579806  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.080302  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.181987  188133 kubeadm.go:1113] duration metric: took 3.829153755s to wait for elevateKubeSystemPrivileges
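
The repeated "kubectl get sa default" calls above (roughly every 500ms) are minikube waiting for the default service account to appear after it binds the kube-system default account to cluster-admin; the controller manager creates that account asynchronously, so the check is simply retry-until-success. Below is a minimal sketch of the same wait, with the kubectl and kubeconfig paths taken from the log and the loop shape being my own.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"
        kubeconfig := "/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // The default service account is created asynchronously by the controller manager;
            // keep polling until "kubectl get sa default" succeeds.
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
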
	I0731 21:03:57.182024  188133 kubeadm.go:394] duration metric: took 4m59.623631766s to StartCluster
	I0731 21:03:57.182051  188133 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.182160  188133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:03:57.185297  188133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.185586  188133 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:03:57.185672  188133 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:03:57.185753  188133 addons.go:69] Setting storage-provisioner=true in profile "no-preload-916885"
	I0731 21:03:57.185776  188133 addons.go:69] Setting default-storageclass=true in profile "no-preload-916885"
	I0731 21:03:57.185797  188133 addons.go:69] Setting metrics-server=true in profile "no-preload-916885"
	I0731 21:03:57.185825  188133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-916885"
	I0731 21:03:57.185844  188133 addons.go:234] Setting addon metrics-server=true in "no-preload-916885"
	W0731 21:03:57.185856  188133 addons.go:243] addon metrics-server should already be in state true
	I0731 21:03:57.185864  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:03:57.185889  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.185785  188133 addons.go:234] Setting addon storage-provisioner=true in "no-preload-916885"
	W0731 21:03:57.185926  188133 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:03:57.185956  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.186201  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186226  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186247  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186279  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186301  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186345  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.187280  188133 out.go:177] * Verifying Kubernetes components...
	I0731 21:03:57.188864  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:03:57.202393  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0731 21:03:57.202431  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0731 21:03:57.202856  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.202946  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.203416  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203434  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203688  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203707  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203829  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204081  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204270  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.204428  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.204462  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.204960  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0731 21:03:57.205722  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.206275  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.206291  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.208245  188133 addons.go:234] Setting addon default-storageclass=true in "no-preload-916885"
	W0731 21:03:57.208264  188133 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:03:57.208296  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.208640  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.208663  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.208866  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.209432  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.209458  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.222235  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0731 21:03:57.222835  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.223408  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.223429  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.224137  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.224366  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.226564  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.227398  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0731 21:03:57.227842  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.228377  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.228399  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.228427  188133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:03:57.228836  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.229521  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.229573  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.230036  188133 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.230056  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:03:57.230075  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.230207  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0731 21:03:57.230601  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.230993  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.231008  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.231323  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.231519  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.233542  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.235239  188133 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:03:52.514632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:55.014017  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:57.235631  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236081  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.236105  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.236478  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:03:57.236493  188133 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:03:57.236510  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.236545  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.236711  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.236824  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.238988  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239335  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.239361  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.239645  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.239775  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.239902  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.252386  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0731 21:03:57.252846  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.253454  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.253474  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.253837  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.254048  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.255784  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.256020  188133 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.256037  188133 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:03:57.256057  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.258870  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259220  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.259254  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259446  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.259612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.259783  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.259940  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.405243  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:03:57.426852  188133 node_ready.go:35] waiting up to 6m0s for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494325  188133 node_ready.go:49] node "no-preload-916885" has status "Ready":"True"
	I0731 21:03:57.494352  188133 node_ready.go:38] duration metric: took 67.471516ms for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494365  188133 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:57.497819  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:03:57.497849  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:03:57.528118  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:03:57.528148  188133 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:03:57.557889  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.568872  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.583099  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:03:57.587315  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:57.587342  188133 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:03:57.645504  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:58.515635  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515650  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515667  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.515675  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516054  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516100  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516117  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516161  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516187  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516141  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516213  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516097  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516431  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516444  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.517889  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.517914  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.517930  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.569097  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.569120  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.569463  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.569511  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.569520  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726076  188133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.080526254s)
	I0731 21:03:58.726140  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726153  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.726469  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.726490  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726501  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726514  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.728603  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.728666  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.728688  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.728715  188133 addons.go:475] Verifying addon metrics-server=true in "no-preload-916885"
	I0731 21:03:58.730520  188133 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:03:58.731823  188133 addons.go:510] duration metric: took 1.546157188s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:03:57.515366  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.515730  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:02.013803  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.593082  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:00.589165  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:00.589192  188133 pod_ready.go:81] duration metric: took 3.00606369s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:00.589204  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:02.597316  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.096168  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.597832  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.597857  188133 pod_ready.go:81] duration metric: took 5.008646335s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.597866  188133 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603105  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.603128  188133 pod_ready.go:81] duration metric: took 5.254251ms for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603140  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610748  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.610771  188133 pod_ready.go:81] duration metric: took 7.623438ms for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610782  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615949  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.615966  188133 pod_ready.go:81] duration metric: took 5.176213ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615975  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620431  188133 pod_ready.go:92] pod "kube-proxy-b4h2z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.620450  188133 pod_ready.go:81] duration metric: took 4.469258ms for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620458  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993080  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.993104  188133 pod_ready.go:81] duration metric: took 372.640001ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993112  188133 pod_ready.go:38] duration metric: took 8.498733061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:05.993125  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:05.993186  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:06.009952  188133 api_server.go:72] duration metric: took 8.824325154s to wait for apiserver process to appear ...
	I0731 21:04:06.009981  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:06.010001  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 21:04:06.014715  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 21:04:06.015917  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:04:06.015944  188133 api_server.go:131] duration metric: took 5.952931ms to wait for apiserver health ...
	I0731 21:04:06.015954  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:06.196874  188133 system_pods.go:59] 9 kube-system pods found
	I0731 21:04:06.196907  188133 system_pods.go:61] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.196914  188133 system_pods.go:61] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.196918  188133 system_pods.go:61] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.196923  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.196929  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.196933  188133 system_pods.go:61] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.196938  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.196945  188133 system_pods.go:61] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.196950  188133 system_pods.go:61] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.196960  188133 system_pods.go:74] duration metric: took 180.999269ms to wait for pod list to return data ...
	I0731 21:04:06.196970  188133 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:06.394499  188133 default_sa.go:45] found service account: "default"
	I0731 21:04:06.394530  188133 default_sa.go:55] duration metric: took 197.552628ms for default service account to be created ...
	I0731 21:04:06.394539  188133 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:06.598314  188133 system_pods.go:86] 9 kube-system pods found
	I0731 21:04:06.598345  188133 system_pods.go:89] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.598354  188133 system_pods.go:89] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.598361  188133 system_pods.go:89] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.598370  188133 system_pods.go:89] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.598376  188133 system_pods.go:89] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.598389  188133 system_pods.go:89] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.598397  188133 system_pods.go:89] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.598408  188133 system_pods.go:89] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.598419  188133 system_pods.go:89] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.598430  188133 system_pods.go:126] duration metric: took 203.884264ms to wait for k8s-apps to be running ...
	I0731 21:04:06.598442  188133 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:06.598498  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:06.613642  188133 system_svc.go:56] duration metric: took 15.190132ms WaitForService to wait for kubelet
	I0731 21:04:06.613675  188133 kubeadm.go:582] duration metric: took 9.4280531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:06.613705  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:06.794163  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:06.794191  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:06.794204  188133 node_conditions.go:105] duration metric: took 180.492992ms to run NodePressure ...
	I0731 21:04:06.794218  188133 start.go:241] waiting for startup goroutines ...
	I0731 21:04:06.794227  188133 start.go:246] waiting for cluster config update ...
	I0731 21:04:06.794239  188133 start.go:255] writing updated cluster config ...
	I0731 21:04:06.794547  188133 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:06.844118  188133 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:04:06.846234  188133 out.go:177] * Done! kubectl is now configured to use "no-preload-916885" cluster and "default" namespace by default
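	The run above finishes by polling the apiserver healthz endpoint (https://192.168.72.239:8443/healthz) until it returns 200 with body "ok". As a rough illustration of that wait, here is a minimal standalone Go sketch; the URL comes from the log, while the timeout, poll interval, and the InsecureSkipVerify shortcut are assumptions of the sketch and not minikube's actual api_server.go implementation.

	// Illustrative sketch only: poll an apiserver /healthz endpoint until it
	// reports "ok", analogous to the api_server.go wait shown in the log above.
	// Timeout, poll interval, and TLS handling are assumptions of this example.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The cluster serves a self-signed certificate; verification is
			// skipped here purely to keep the sketch short.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Matches the "returned 200: ok" lines in the log.
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.239:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz returned 200: ok")
	}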
	I0731 21:04:04.015079  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:06.514907  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:08.514958  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:11.014341  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:13.514956  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:14.014985  187862 pod_ready.go:81] duration metric: took 4m0.007784922s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:04:14.015013  187862 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:04:14.015020  187862 pod_ready.go:38] duration metric: took 4m6.056814749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:14.015034  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:14.015079  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:14.015127  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:14.086254  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:14.086283  187862 cri.go:89] found id: ""
	I0731 21:04:14.086293  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:14.086368  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.091267  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:14.091334  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:14.138577  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.138613  187862 cri.go:89] found id: ""
	I0731 21:04:14.138624  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:14.138696  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.143245  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:14.143315  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:14.182295  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.182325  187862 cri.go:89] found id: ""
	I0731 21:04:14.182336  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:14.182400  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.186861  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:14.186936  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:14.230524  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:14.230547  187862 cri.go:89] found id: ""
	I0731 21:04:14.230555  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:14.230609  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.235285  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:14.235354  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:14.279188  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.279209  187862 cri.go:89] found id: ""
	I0731 21:04:14.279217  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:14.279268  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.284280  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:14.284362  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:14.333736  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:14.333764  187862 cri.go:89] found id: ""
	I0731 21:04:14.333774  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:14.333830  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.338652  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:14.338717  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:14.380632  187862 cri.go:89] found id: ""
	I0731 21:04:14.380663  187862 logs.go:276] 0 containers: []
	W0731 21:04:14.380672  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:14.380678  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:14.380747  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:14.424705  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.424727  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.424732  187862 cri.go:89] found id: ""
	I0731 21:04:14.424741  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:14.424801  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.429310  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.434243  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:14.434267  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:14.490743  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:14.490782  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.536575  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:14.536613  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.585952  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:14.585986  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.626198  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:14.626228  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:14.672674  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:14.672712  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.711759  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:14.711788  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.757020  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:14.757047  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:15.286344  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:15.286393  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:15.301933  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:15.301969  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:15.451532  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:15.451566  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:15.502398  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:15.502443  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:15.544678  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:15.544719  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:17.729291  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:04:17.730290  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:17.730512  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:18.104050  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:18.121028  187862 api_server.go:72] duration metric: took 4m17.382743031s to wait for apiserver process to appear ...
	I0731 21:04:18.121061  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:18.121109  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:18.121179  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:18.165472  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.165498  187862 cri.go:89] found id: ""
	I0731 21:04:18.165507  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:18.165559  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.169592  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:18.169663  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:18.216918  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.216942  187862 cri.go:89] found id: ""
	I0731 21:04:18.216951  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:18.217015  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.221467  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:18.221546  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:18.267066  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.267089  187862 cri.go:89] found id: ""
	I0731 21:04:18.267098  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:18.267164  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.271583  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:18.271662  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:18.316381  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.316404  187862 cri.go:89] found id: ""
	I0731 21:04:18.316412  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:18.316472  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.320859  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:18.320932  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:18.365366  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:18.365396  187862 cri.go:89] found id: ""
	I0731 21:04:18.365410  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:18.365476  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.369933  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:18.370019  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:18.411121  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:18.411143  187862 cri.go:89] found id: ""
	I0731 21:04:18.411152  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:18.411203  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.415493  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:18.415561  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:18.453040  187862 cri.go:89] found id: ""
	I0731 21:04:18.453069  187862 logs.go:276] 0 containers: []
	W0731 21:04:18.453078  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:18.453085  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:18.453153  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:18.499335  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:18.499359  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.499363  187862 cri.go:89] found id: ""
	I0731 21:04:18.499371  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:18.499446  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.504353  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.508619  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:18.508640  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:18.562692  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:18.562732  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.623405  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:18.623446  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.679472  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:18.679510  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.728893  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:18.728933  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.770963  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:18.770994  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:18.819353  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:18.819385  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:18.835654  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:18.835684  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:18.947479  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:18.947516  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.995005  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:18.995043  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:19.033246  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:19.033274  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:19.092703  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:19.092740  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:19.129738  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:19.129769  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:22.058935  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 21:04:22.063496  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 21:04:22.064670  187862 api_server.go:141] control plane version: v1.30.3
	I0731 21:04:22.064690  187862 api_server.go:131] duration metric: took 3.943623055s to wait for apiserver health ...
	I0731 21:04:22.064699  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:22.064721  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:22.064771  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:22.103710  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.103733  187862 cri.go:89] found id: ""
	I0731 21:04:22.103741  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:22.103798  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.108133  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:22.108203  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:22.159120  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.159145  187862 cri.go:89] found id: ""
	I0731 21:04:22.159155  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:22.159213  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.165107  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:22.165169  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:22.202426  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.202454  187862 cri.go:89] found id: ""
	I0731 21:04:22.202464  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:22.202524  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.206785  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:22.206842  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:22.245008  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.245039  187862 cri.go:89] found id: ""
	I0731 21:04:22.245050  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:22.245111  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.249467  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:22.249548  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:22.731353  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:22.731627  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:22.298105  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.298135  187862 cri.go:89] found id: ""
	I0731 21:04:22.298145  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:22.298209  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.302845  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:22.302902  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:22.346868  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.346898  187862 cri.go:89] found id: ""
	I0731 21:04:22.346909  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:22.346978  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.351246  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:22.351313  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:22.389698  187862 cri.go:89] found id: ""
	I0731 21:04:22.389730  187862 logs.go:276] 0 containers: []
	W0731 21:04:22.389742  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:22.389751  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:22.389817  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:22.425212  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.425234  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.425238  187862 cri.go:89] found id: ""
	I0731 21:04:22.425245  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:22.425298  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.429584  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.433471  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:22.433496  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.490354  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:22.490390  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.530117  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:22.530146  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:22.545249  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:22.545281  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:22.658074  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:22.658115  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.711537  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:22.711573  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.758644  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:22.758685  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.796716  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:22.796751  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.843502  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:22.843538  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.881738  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:22.881765  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:22.936317  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:22.936360  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.977562  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:22.977592  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:23.354873  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:23.354921  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:25.917553  187862 system_pods.go:59] 8 kube-system pods found
	I0731 21:04:25.917588  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.917593  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.917597  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.917601  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.917604  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.917608  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.917614  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.917624  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.917635  187862 system_pods.go:74] duration metric: took 3.852929636s to wait for pod list to return data ...
	I0731 21:04:25.917649  187862 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:25.920234  187862 default_sa.go:45] found service account: "default"
	I0731 21:04:25.920256  187862 default_sa.go:55] duration metric: took 2.600194ms for default service account to be created ...
	I0731 21:04:25.920264  187862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:25.926296  187862 system_pods.go:86] 8 kube-system pods found
	I0731 21:04:25.926325  187862 system_pods.go:89] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.926330  187862 system_pods.go:89] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.926334  187862 system_pods.go:89] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.926338  187862 system_pods.go:89] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.926342  187862 system_pods.go:89] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.926346  187862 system_pods.go:89] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.926352  187862 system_pods.go:89] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.926356  187862 system_pods.go:89] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.926365  187862 system_pods.go:126] duration metric: took 6.094538ms to wait for k8s-apps to be running ...
	I0731 21:04:25.926373  187862 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:25.926433  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:25.945225  187862 system_svc.go:56] duration metric: took 18.837835ms WaitForService to wait for kubelet
	I0731 21:04:25.945264  187862 kubeadm.go:582] duration metric: took 4m25.206984451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:25.945294  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:25.948480  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:25.948506  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:25.948520  187862 node_conditions.go:105] duration metric: took 3.219175ms to run NodePressure ...
	I0731 21:04:25.948535  187862 start.go:241] waiting for startup goroutines ...
	I0731 21:04:25.948543  187862 start.go:246] waiting for cluster config update ...
	I0731 21:04:25.948556  187862 start.go:255] writing updated cluster config ...
	I0731 21:04:25.949317  187862 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:26.000525  187862 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:04:26.002719  187862 out.go:177] * Done! kubectl is now configured to use "embed-certs-831240" cluster and "default" namespace by default
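(Editor's note, not part of the captured log: at this point the embed-certs-831240 profile has finished starting. A minimal sketch of how a reader could confirm the cluster is reachable from the same machine, assuming kubectl and minikube are on PATH; the context/profile name is taken from the "Done!" line above.)

	$ kubectl config current-context                      # should print embed-certs-831240
	$ kubectl --context embed-certs-831240 get nodes      # node should report Ready
	$ kubectl --context embed-certs-831240 get pods -A    # kube-system pods listed in the log above
	$ minikube -p embed-certs-831240 status               # host/kubelet/apiserver status for the profile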
	I0731 21:04:32.732572  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:32.732835  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:52.734257  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:52.734530  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739465  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:05:32.739778  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739796  188656 kubeadm.go:310] 
	I0731 21:05:32.739854  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:05:32.739962  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:05:32.739988  188656 kubeadm.go:310] 
	I0731 21:05:32.740034  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:05:32.740083  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:05:32.740230  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:05:32.740245  188656 kubeadm.go:310] 
	I0731 21:05:32.740393  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:05:32.740441  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:05:32.740485  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:05:32.740494  188656 kubeadm.go:310] 
	I0731 21:05:32.740624  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:05:32.740741  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:05:32.740752  188656 kubeadm.go:310] 
	I0731 21:05:32.740888  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:05:32.741008  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:05:32.741084  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:05:32.741145  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:05:32.741152  188656 kubeadm.go:310] 
	I0731 21:05:32.741834  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:05:32.741967  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:05:32.742066  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:05:32.742264  188656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:05:32.742340  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:05:33.227380  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:05:33.243864  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:05:33.254208  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:05:33.254234  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:05:33.254313  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:05:33.264766  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:05:33.264846  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:05:33.275517  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:05:33.286281  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:05:33.286358  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:05:33.297108  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.307555  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:05:33.307627  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.318193  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:05:33.328155  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:05:33.328220  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:05:33.338088  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:05:33.569897  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:07:29.725230  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:07:29.725381  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:07:29.726868  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:07:29.726959  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:07:29.727064  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:07:29.727204  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:07:29.727322  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:07:29.727389  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:07:29.729525  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:07:29.729659  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:07:29.729761  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:07:29.729918  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:07:29.730026  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:07:29.730126  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:07:29.730268  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:07:29.730369  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:07:29.730461  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:07:29.730555  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:07:29.730658  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:07:29.730713  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:07:29.730790  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:07:29.730856  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:07:29.730931  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:07:29.731014  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:07:29.731111  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:07:29.731248  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:07:29.731339  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:07:29.731395  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:07:29.731486  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:07:29.733052  188656 out.go:204]   - Booting up control plane ...
	I0731 21:07:29.733146  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:07:29.733226  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:07:29.733305  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:07:29.733454  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:07:29.733656  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:07:29.733735  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:07:29.733830  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734048  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734116  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734275  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734331  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734543  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734642  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734868  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734966  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.735234  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.735252  188656 kubeadm.go:310] 
	I0731 21:07:29.735313  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:07:29.735376  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:07:29.735385  188656 kubeadm.go:310] 
	I0731 21:07:29.735432  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:07:29.735480  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:07:29.735624  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:07:29.735634  188656 kubeadm.go:310] 
	I0731 21:07:29.735779  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:07:29.735830  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:07:29.735879  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:07:29.735889  188656 kubeadm.go:310] 
	I0731 21:07:29.736038  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:07:29.736129  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:07:29.736141  188656 kubeadm.go:310] 
	I0731 21:07:29.736241  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:07:29.736315  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:07:29.736400  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:07:29.736480  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:07:29.736537  188656 kubeadm.go:310] 
	I0731 21:07:29.736579  188656 kubeadm.go:394] duration metric: took 7m58.053099483s to StartCluster
	I0731 21:07:29.736660  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:07:29.736793  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:07:29.802897  188656 cri.go:89] found id: ""
	I0731 21:07:29.802932  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.802945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:07:29.802953  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:07:29.803021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:07:29.840059  188656 cri.go:89] found id: ""
	I0731 21:07:29.840088  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.840098  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:07:29.840106  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:07:29.840178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:07:29.881030  188656 cri.go:89] found id: ""
	I0731 21:07:29.881058  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.881066  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:07:29.881073  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:07:29.881150  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:07:29.923495  188656 cri.go:89] found id: ""
	I0731 21:07:29.923524  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.923532  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:07:29.923538  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:07:29.923604  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:07:29.966128  188656 cri.go:89] found id: ""
	I0731 21:07:29.966156  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.966164  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:07:29.966171  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:07:29.966236  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:07:30.007648  188656 cri.go:89] found id: ""
	I0731 21:07:30.007678  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.007687  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:07:30.007693  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:07:30.007748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:07:30.047857  188656 cri.go:89] found id: ""
	I0731 21:07:30.047887  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.047903  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:07:30.047909  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:07:30.047959  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:07:30.087245  188656 cri.go:89] found id: ""
	I0731 21:07:30.087275  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.087283  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:07:30.087294  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:07:30.087308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:07:30.168205  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:07:30.168235  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:07:30.168256  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:07:30.276908  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:07:30.276951  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:07:30.322993  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:07:30.323030  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:07:30.375237  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:07:30.375287  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 21:07:30.392523  188656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:07:30.392579  188656 out.go:239] * 
	W0731 21:07:30.392653  188656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.392683  188656 out.go:239] * 
	W0731 21:07:30.393845  188656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:07:30.397498  188656 out.go:177] 
	W0731 21:07:30.398890  188656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.398959  188656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:07:30.398995  188656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:07:30.401295  188656 out.go:177] 
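(Editor's note, not part of the captured log: a minimal sketch of the follow-up that the Suggestion and kubeadm messages above point at. <profile> stands for the affected minikube profile, which is not named in this excerpt; the crictl invocation mirrors the one quoted in the kubeadm output.)

	$ minikube ssh -p <profile> -- sudo journalctl -xeu kubelet
	$ minikube ssh -p <profile> -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	$ minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd   # workaround suggested in the log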
	
	
	==> CRI-O <==
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.093163003Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&PodSandboxMetadata{Name:busybox,Uid:e9dc9efd-eba1-4457-8c17-44c18ddc2986,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459605994879857,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:59:58.046332882Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-2ks55,Uid:f5ad9d76-5cdc-430e-8933-7e72a2dda95f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459605889377
005,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:59:58.046341189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b1c21306128971e569ab1c8865502b46da51404bd2b73cdeb95749adf4c477a,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-slbkm,Uid:f93f674b-1f0e-443b-ac06-9c2a5234eeea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459604092193880,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-slbkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f93f674b-1f0e-443b-ac06-9c2a5234eeea,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:59:58.
046329576Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&PodSandboxMetadata{Name:kube-proxy-x662j,Uid:9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459598361809734,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:59:58.046338258Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459598353594284,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-07-31T20:59:58.046331174Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-831240,Uid:9cb75276836c0666f9aaf558c691b62a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459593538473017,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9cb75276836c0666f9aaf558c691b62a,kubernetes.io/config.seen: 2024-07-31T20:59:53.049366122Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-831240,Uid:2379a05be7274
2c63e504be6c05a56c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459593522317931,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.92:8443,kubernetes.io/config.hash: 2379a05be72742c63e504be6c05a56c0,kubernetes.io/config.seen: 2024-07-31T20:59:53.049364149Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-831240,Uid:6e4acf8178011ec8033f5125bfb2873e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459593510005249,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.
name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6e4acf8178011ec8033f5125bfb2873e,kubernetes.io/config.seen: 2024-07-31T20:59:53.049367800Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-831240,Uid:fe6ee627ad68fa4b9c68b699e5ec6f11,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459593506182026,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.92:2379,kubernetes.io/config.hash: fe6ee627ad68fa4b9c68b699e5ec
6f11,kubernetes.io/config.seen: 2024-07-31T20:59:53.049358358Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9b075f8c-d247-4e32-bb48-43376a05877b name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.094058922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c11b39ef-a2c0-459d-8f89-307ec7c469d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.094119521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c11b39ef-a2c0-459d-8f89-307ec7c469d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.094420479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c232171f4c0eca21dc25a6c4d0f52c084e5a1a7af6d60912bf3730fc909b20e6,PodSandboxId:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459608923995607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{io.kubernetes.container.hash: 5666726c,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084,PodSandboxId:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459606116291685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,},Annotations:map[string]string{io.kubernetes.container.hash: db490d7e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459599180433548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459598605937073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845,PodSandboxId:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459598560470090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28
d21,},Annotations:map[string]string{io.kubernetes.container.hash: f9c9821,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5,PodSandboxId:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459593901395850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e,PodSandboxId:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459593892773295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c24fb674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473,PodSandboxId:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459593906548374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8
701a33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f,PodSandboxId:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459593903235672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c11b39ef-a2c0-459d-8f89-307ec7c469d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.104567180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f895b2b-0a98-4ff8-b263-1609f3b128da name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.104742958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f895b2b-0a98-4ff8-b263-1609f3b128da name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.105542558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1c1c555-80d4-4dcc-834c-a6d9e23d753a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.105973128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460408105954694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1c1c555-80d4-4dcc-834c-a6d9e23d753a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.106411509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a77b8ce0-25f0-4b4e-95cc-1eac3db921e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.106462315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a77b8ce0-25f0-4b4e-95cc-1eac3db921e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.106648537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c232171f4c0eca21dc25a6c4d0f52c084e5a1a7af6d60912bf3730fc909b20e6,PodSandboxId:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459608923995607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{io.kubernetes.container.hash: 5666726c,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084,PodSandboxId:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459606116291685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,},Annotations:map[string]string{io.kubernetes.container.hash: db490d7e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459599180433548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459598605937073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845,PodSandboxId:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459598560470090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28
d21,},Annotations:map[string]string{io.kubernetes.container.hash: f9c9821,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5,PodSandboxId:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459593901395850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e,PodSandboxId:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459593892773295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c24fb674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473,PodSandboxId:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459593906548374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8
701a33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f,PodSandboxId:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459593903235672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a77b8ce0-25f0-4b4e-95cc-1eac3db921e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.147487382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc20f7cd-a3b8-428f-9bca-3b2a025cde91 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.147592238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc20f7cd-a3b8-428f-9bca-3b2a025cde91 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.148944898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89151ef3-0177-4a09-9e3c-25a986a1cc97 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.149405003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460408149381071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89151ef3-0177-4a09-9e3c-25a986a1cc97 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.150033125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32086efb-220a-4603-94fa-de7f7df77300 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.150119141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32086efb-220a-4603-94fa-de7f7df77300 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.150306221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c232171f4c0eca21dc25a6c4d0f52c084e5a1a7af6d60912bf3730fc909b20e6,PodSandboxId:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459608923995607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{io.kubernetes.container.hash: 5666726c,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084,PodSandboxId:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459606116291685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,},Annotations:map[string]string{io.kubernetes.container.hash: db490d7e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459599180433548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459598605937073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845,PodSandboxId:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459598560470090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28
d21,},Annotations:map[string]string{io.kubernetes.container.hash: f9c9821,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5,PodSandboxId:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459593901395850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e,PodSandboxId:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459593892773295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c24fb674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473,PodSandboxId:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459593906548374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8
701a33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f,PodSandboxId:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459593903235672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32086efb-220a-4603-94fa-de7f7df77300 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.186254478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bc0c9e0-9aa0-4b26-b8a2-00ab0b69bcba name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.186347172Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bc0c9e0-9aa0-4b26-b8a2-00ab0b69bcba name=/runtime.v1.RuntimeService/Version
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.187765533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97644eca-bdc3-41dd-88ea-6cce4cdcacc4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.188167279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460408188142825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97644eca-bdc3-41dd-88ea-6cce4cdcacc4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.189048500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c79dd63-a51d-4d13-b7ed-fcc2434fc2e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.189103622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c79dd63-a51d-4d13-b7ed-fcc2434fc2e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:13:28 embed-certs-831240 crio[737]: time="2024-07-31 21:13:28.189336124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c232171f4c0eca21dc25a6c4d0f52c084e5a1a7af6d60912bf3730fc909b20e6,PodSandboxId:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459608923995607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{io.kubernetes.container.hash: 5666726c,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084,PodSandboxId:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459606116291685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,},Annotations:map[string]string{io.kubernetes.container.hash: db490d7e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459599180433548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459598605937073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845,PodSandboxId:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459598560470090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28
d21,},Annotations:map[string]string{io.kubernetes.container.hash: f9c9821,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5,PodSandboxId:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459593901395850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e,PodSandboxId:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459593892773295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c24fb674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473,PodSandboxId:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459593906548374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8
701a33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f,PodSandboxId:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459593903235672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c79dd63-a51d-4d13-b7ed-fcc2434fc2e1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c232171f4c0ec       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   a1ba259d456e2       busybox
	1a7f319ba94b3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   9346eed236060       coredns-7db6d8ff4d-2ks55
	919f3cf1d058c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   150427d1d3a85       storage-provisioner
	c0ca8e260d6f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   150427d1d3a85       storage-provisioner
	b51b7e8b0ab34       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   4c6d4159b4e57       kube-proxy-x662j
	dafbb34397064       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   47c3808cbb510       kube-apiserver-embed-certs-831240
	0854d075486b3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   3603f03551131       kube-controller-manager-embed-certs-831240
	3ac0d9edc6a97       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   7b1d539936fe6       kube-scheduler-embed-certs-831240
	7544698b6925d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   126e48bab7327       etcd-embed-certs-831240
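The container status table above can be re-derived directly on the node. A minimal sketch, assuming the minikube profile is named embed-certs-831240 as in this log and that crictl is present on the node (consistent with the output, but not verified here):

  # open a shell on the node for this profile
  minikube -p embed-certs-831240 ssh
  # list all CRI-O containers, including exited ones (same data as the table above)
  sudo crictl ps -a
  # tail one container's logs; the truncated IDs from the table are assumed to be accepted as prefixes
  sudo crictl logs --tail=50 1a7f319ba94b3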
	
	
	==> coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35877 - 465 "HINFO IN 3264330224851131081.6087925659700021598. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01634638s
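Beyond the single HINFO self-probe above, in-cluster DNS resolution can be spot-checked with a throwaway pod. A sketch only, assuming the kubectl context name matches the profile and that the busybox image already pulled in this cluster ships an nslookup applet:

  kubectl --context embed-certs-831240 run dns-check --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local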
	
	
	==> describe nodes <==
	Name:               embed-certs-831240
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-831240
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=embed-certs-831240
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_50_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:50:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-831240
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:13:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:10:40 +0000   Wed, 31 Jul 2024 20:50:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:10:40 +0000   Wed, 31 Jul 2024 20:50:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:10:40 +0000   Wed, 31 Jul 2024 20:50:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:10:40 +0000   Wed, 31 Jul 2024 21:00:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    embed-certs-831240
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ba3feff689b407f95c0441506aeade9
	  System UUID:                3ba3feff-689b-407f-95c0-441506aeade9
	  Boot ID:                    3d58d390-3b96-4c0d-8218-86dbdef3d594
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-2ks55                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-831240                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-831240             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-831240    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-x662j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-831240             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-slbkm               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-831240 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-831240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-831240 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-831240 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-831240 event: Registered Node embed-certs-831240 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-831240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-831240 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-831240 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-831240 event: Registered Node embed-certs-831240 in Controller
	
	
	==> dmesg <==
	[Jul31 20:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055797] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043149] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.146117] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.581222] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.601985] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.392567] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.060909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079130] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.163159] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.144212] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +0.283662] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +4.405540] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +0.072373] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778971] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +5.671234] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.341414] systemd-fstab-generator[1608]: Ignoring "noauto" option for root device
	[Jul31 21:00] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.555378] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] <==
	{"level":"info","ts":"2024-07-31T20:59:54.496347Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0381c3cc77c8c9d","local-member-id":"d468df581a6d993d","added-peer-id":"d468df581a6d993d","added-peer-peer-urls":["https://192.168.39.92:2380"]}
	{"level":"info","ts":"2024-07-31T20:59:54.496472Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0381c3cc77c8c9d","local-member-id":"d468df581a6d993d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:59:54.496519Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T20:59:54.533784Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T20:59:54.534163Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d468df581a6d993d","initial-advertise-peer-urls":["https://192.168.39.92:2380"],"listen-peer-urls":["https://192.168.39.92:2380"],"advertise-client-urls":["https://192.168.39.92:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.92:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T20:59:54.53422Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T20:59:54.534189Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.92:2380"}
	{"level":"info","ts":"2024-07-31T20:59:54.542965Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.92:2380"}
	{"level":"info","ts":"2024-07-31T20:59:55.908338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d468df581a6d993d is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T20:59:55.908405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d468df581a6d993d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T20:59:55.908445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d468df581a6d993d received MsgPreVoteResp from d468df581a6d993d at term 2"}
	{"level":"info","ts":"2024-07-31T20:59:55.908462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d468df581a6d993d became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T20:59:55.908468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d468df581a6d993d received MsgVoteResp from d468df581a6d993d at term 3"}
	{"level":"info","ts":"2024-07-31T20:59:55.908476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d468df581a6d993d became leader at term 3"}
	{"level":"info","ts":"2024-07-31T20:59:55.908486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d468df581a6d993d elected leader d468df581a6d993d at term 3"}
	{"level":"info","ts":"2024-07-31T20:59:55.912218Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d468df581a6d993d","local-member-attributes":"{Name:embed-certs-831240 ClientURLs:[https://192.168.39.92:2379]}","request-path":"/0/members/d468df581a6d993d/attributes","cluster-id":"f0381c3cc77c8c9d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T20:59:55.912233Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:59:55.912406Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:59:55.912836Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:59:55.912896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T20:59:55.914588Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:59:55.914985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.92:2379"}
	{"level":"info","ts":"2024-07-31T21:09:55.940411Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2024-07-31T21:09:55.951471Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":834,"took":"10.646857ms","hash":3126182695,"current-db-size-bytes":2621440,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2621440,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-31T21:09:55.951531Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3126182695,"revision":834,"compact-revision":-1}
	
	
	==> kernel <==
	 21:13:28 up 14 min,  0 users,  load average: 0.13, 0.11, 0.09
	Linux embed-certs-831240 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] <==
	I0731 21:07:58.299920       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:09:57.302115       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:09:57.302429       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 21:09:58.302951       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:09:58.303001       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:09:58.303009       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:09:58.303069       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:09:58.303173       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:09:58.304402       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:10:58.303916       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:10:58.304143       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:10:58.304176       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:10:58.305071       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:10:58.305290       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:10:58.305302       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:12:58.304444       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:12:58.304546       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:12:58.304556       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:12:58.305528       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:12:58.305651       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:12:58.305725       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] <==
	I0731 21:07:41.096496       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:08:10.636822       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:08:11.104208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:08:40.642388       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:08:41.112562       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:09:10.648346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:09:11.121483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:09:40.653387       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:09:41.129758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:10:10.660904       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:10:11.138872       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:10:40.666397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:10:41.146092       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:11:01.101320       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="855.578µs"
	E0731 21:11:10.674498       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:11:11.154511       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:11:14.099284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="657.525µs"
	E0731 21:11:40.679952       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:11:41.163040       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:12:10.685414       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:12:11.171167       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:12:40.689899       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:12:41.180385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:13:10.696333       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:13:11.188219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] <==
	I0731 20:59:58.755536       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:59:58.765857       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	I0731 20:59:58.798976       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:59:58.799019       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:59:58.799034       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:59:58.801627       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:59:58.801911       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:59:58.801936       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:59:58.804372       1 config.go:192] "Starting service config controller"
	I0731 20:59:58.804411       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:59:58.804455       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:59:58.804472       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:59:58.806416       1 config.go:319] "Starting node config controller"
	I0731 20:59:58.806448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:59:58.905388       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:59:58.905467       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:59:58.907001       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] <==
	I0731 20:59:57.279856       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 20:59:57.279952       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0731 20:59:57.291326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:59:57.291269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 20:59:57.291459       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 20:59:57.291526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:59:57.291750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:59:57.291851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:59:57.291860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:59:57.291781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:59:57.292060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 20:59:57.292088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 20:59:57.292156       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 20:59:57.292872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 20:59:57.292358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:59:57.292906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:59:57.292456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 20:59:57.292995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 20:59:57.292490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 20:59:57.293010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 20:59:57.292582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 20:59:57.293095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 20:59:57.292765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 20:59:57.293108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0731 20:59:57.380803       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:10:53 embed-certs-831240 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:10:53 embed-certs-831240 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:10:53 embed-certs-831240 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:11:01 embed-certs-831240 kubelet[948]: E0731 21:11:01.081525     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:11:14 embed-certs-831240 kubelet[948]: E0731 21:11:14.081527     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:11:29 embed-certs-831240 kubelet[948]: E0731 21:11:29.081599     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:11:41 embed-certs-831240 kubelet[948]: E0731 21:11:41.081633     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:11:53 embed-certs-831240 kubelet[948]: E0731 21:11:53.109889     948 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:11:53 embed-certs-831240 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:11:53 embed-certs-831240 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:11:53 embed-certs-831240 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:11:53 embed-certs-831240 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:11:55 embed-certs-831240 kubelet[948]: E0731 21:11:55.083750     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:12:09 embed-certs-831240 kubelet[948]: E0731 21:12:09.081126     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:12:20 embed-certs-831240 kubelet[948]: E0731 21:12:20.081426     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:12:35 embed-certs-831240 kubelet[948]: E0731 21:12:35.084990     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:12:47 embed-certs-831240 kubelet[948]: E0731 21:12:47.082216     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:12:53 embed-certs-831240 kubelet[948]: E0731 21:12:53.105181     948 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:12:53 embed-certs-831240 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:12:53 embed-certs-831240 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:12:53 embed-certs-831240 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:12:53 embed-certs-831240 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:12:58 embed-certs-831240 kubelet[948]: E0731 21:12:58.081832     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:13:10 embed-certs-831240 kubelet[948]: E0731 21:13:10.081612     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:13:23 embed-certs-831240 kubelet[948]: E0731 21:13:23.083256     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	
	
	==> storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] <==
	I0731 20:59:59.377369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:59:59.399798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:59:59.399907       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:00:16.807008       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:00:16.807203       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-831240_33fb3b93-9780-45ba-addc-4cd2a27f806b!
	I0731 21:00:16.808524       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3371e2f2-9fef-4856-9b93-ff0c113558f7", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-831240_33fb3b93-9780-45ba-addc-4cd2a27f806b became leader
	I0731 21:00:16.908361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-831240_33fb3b93-9780-45ba-addc-4cd2a27f806b!
	
	
	==> storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] <==
	I0731 20:59:58.711340       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 20:59:58.715150       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831240 -n embed-certs-831240
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-831240 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-slbkm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-831240 describe pod metrics-server-569cc877fc-slbkm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-831240 describe pod metrics-server-569cc877fc-slbkm: exit status 1 (63.307419ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-slbkm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-831240 describe pod metrics-server-569cc877fc-slbkm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0731 21:07:34.577814  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0731 21:08:08.295204  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0731 21:08:34.982450  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0731 21:08:53.990946  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[the preceding warning line was repeated 20 times]
E0731 21:09:13.436028  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[the preceding warning line was repeated 13 times]
E0731 21:09:26.620268  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[the preceding warning line was repeated 5 times]
E0731 21:09:31.342881  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[the preceding warning line was repeated 38 times]
E0731 21:10:09.825507  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[the preceding warning line was repeated 11 times]
E0731 21:10:21.068188  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[the preceding warning line was repeated 16 times]
E0731 21:10:36.479336  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[the preceding warning line was repeated 12 times]
E0731 21:10:48.407943  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0731 21:10:49.666190  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[the preceding warning line was repeated 48 times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0731 21:12:11.938479  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
[the connection-refused warning repeated 19 more times]
E0731 21:12:30.946774  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
[the connection-refused warning repeated 4 more times]
E0731 21:12:34.578099  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
[the connection-refused warning repeated 34 more times]
E0731 21:13:08.295230  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
[the connection-refused warning repeated 4 more times]
E0731 21:13:12.872895  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
[the connection-refused warning repeated 61 more times]
E0731 21:14:13.435896  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[previous WARNING repeated 12 more times]
E0731 21:14:26.619550  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[previous WARNING repeated 42 more times]
E0731 21:15:09.824866  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[previous WARNING repeated 10 more times]
E0731 21:15:21.067821  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[previous WARNING repeated 27 more times]
E0731 21:15:48.407619  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[previous WARNING repeated 44 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
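The run of identical warnings above comes from the test helper repeatedly listing pods in the kubernetes-dashboard namespace while nothing is listening on the profile's apiserver endpoint (192.168.61.51:8443); the final line appears once client-go's client-side rate limiter Wait returns the wait context's deadline-exceeded error. The same check can be approximated by hand; the commands below are only a sketch, assuming the kubeconfig context carries the profile name (as minikube normally configures it):

	# list the pods the helper is polling for, via the profile's kubeconfig context
	kubectl --context old-k8s-version-239115 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# the raw request from the warning above; -k skips TLS verification
	# (a healthy apiserver would answer this unauthenticated call with 401 instead of refusing the connection)
	curl -k "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"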
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 2 (223.208072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-239115" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
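The wait above gave up after 9m0s because the profile's apiserver is down: the status probe prints Stopped for {{.APIServer}}, while the host probe in the post-mortem below still reports Running, so the VM is reachable but the control plane is not. The commands below are a minimal sketch of how one might re-check the profile by hand, reusing the binary and profile name that appear in this log; the restart step is an assumption about manual debugging, not something the test performs:

	# show host/kubelet/apiserver state for the profile and node
	out/minikube-linux-amd64 status -p old-k8s-version-239115 -n old-k8s-version-239115

	# attempt to bring the control plane back, then repeat the dashboard pod check
	out/minikube-linux-amd64 start -p old-k8s-version-239115
	kubectl --context old-k8s-version-239115 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard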
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 2 (222.033102ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-239115 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-239115 logs -n 25: (1.567694215s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC |                     |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo find                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo crio                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-341849                                       | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-248084 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-248084                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:51 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831240            | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-916885             | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
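	# Sketch only (not part of the captured log): a manual re-run of the stop / enable-dashboard / restart
	# rows for old-k8s-version-239115 recorded in the audit table above, using the exact arguments from the
	# table; the binary path matches the one used by the test suite.
	out/minikube-linux-amd64 stop -p old-k8s-version-239115 --alsologtostderr -v=3
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-239115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	out/minikube-linux-amd64 start -p old-k8s-version-239115 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0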
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:55:13
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:55:13.835355  188656 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:55:13.835514  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835525  188656 out.go:304] Setting ErrFile to fd 2...
	I0731 20:55:13.835531  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835717  188656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:55:13.836233  188656 out.go:298] Setting JSON to false
	I0731 20:55:13.837146  188656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9450,"bootTime":1722449864,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:55:13.837206  188656 start.go:139] virtualization: kvm guest
	I0731 20:55:13.839094  188656 out.go:177] * [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:55:13.840630  188656 notify.go:220] Checking for updates...
	I0731 20:55:13.840638  188656 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:55:13.841884  188656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:55:13.843054  188656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:55:13.844295  188656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:55:13.845348  188656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:55:13.846480  188656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:55:13.847974  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:55:13.848349  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.848390  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.863017  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0731 20:55:13.863418  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.863927  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.863980  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.864357  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.864625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.866178  188656 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 20:55:13.867248  188656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:55:13.867523  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.867552  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.881922  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0731 20:55:13.882304  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.882707  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.882729  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.883037  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.883214  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.917067  188656 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:55:13.918247  188656 start.go:297] selected driver: kvm2
	I0731 20:55:13.918260  188656 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.918396  188656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:55:13.919323  188656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.919428  188656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:55:13.934150  188656 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:55:13.934506  188656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:55:13.934569  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:55:13.934583  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:55:13.934630  188656 start.go:340] cluster config:
	{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.934737  188656 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.936401  188656 out.go:177] * Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	I0731 20:55:13.769565  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:13.937700  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:55:13.937735  188656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:55:13.937743  188656 cache.go:56] Caching tarball of preloaded images
	I0731 20:55:13.937806  188656 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:55:13.937816  188656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:55:13.937907  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:55:13.938068  188656 start.go:360] acquireMachinesLock for old-k8s-version-239115: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:55:19.845616  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:22.917614  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:28.997601  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:32.069596  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:38.149607  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:41.221579  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:47.301587  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:50.373695  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:56.453611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:59.525649  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:05.605640  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:08.677654  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:14.757599  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:17.829627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:23.909581  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:26.981613  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:33.061611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:36.133597  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:42.213638  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:45.285703  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:51.365653  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:54.437615  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:00.517627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:03.589595  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:09.669666  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:12.741661  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:18.821643  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:21.893594  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:27.973636  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:31.045651  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:37.125619  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:40.197656  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:46.277679  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:49.349535  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:55.429634  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:58.501611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:04.581620  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:07.653642  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:13.733571  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:16.805674  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:19.809697  188133 start.go:364] duration metric: took 4m15.439364065s to acquireMachinesLock for "no-preload-916885"
	I0731 20:58:19.809748  188133 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:19.809756  188133 fix.go:54] fixHost starting: 
	I0731 20:58:19.810113  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:19.810149  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:19.825131  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0731 20:58:19.825615  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:19.826110  188133 main.go:141] libmachine: Using API Version  1
	I0731 20:58:19.826132  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:19.826439  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:19.826616  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:19.826840  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 20:58:19.828267  188133 fix.go:112] recreateIfNeeded on no-preload-916885: state=Stopped err=<nil>
	I0731 20:58:19.828294  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	W0731 20:58:19.828471  188133 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:19.829957  188133 out.go:177] * Restarting existing kvm2 VM for "no-preload-916885" ...
	I0731 20:58:19.807506  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:19.807579  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.807919  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:58:19.807946  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.808126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:58:19.809580  187862 machine.go:97] duration metric: took 4m37.431426503s to provisionDockerMachine
	I0731 20:58:19.809625  187862 fix.go:56] duration metric: took 4m37.4520345s for fixHost
	I0731 20:58:19.809631  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 4m37.452053341s
	W0731 20:58:19.809664  187862 start.go:714] error starting host: provision: host is not running
	W0731 20:58:19.809893  187862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 20:58:19.809916  187862 start.go:729] Will try again in 5 seconds ...
	I0731 20:58:19.831221  188133 main.go:141] libmachine: (no-preload-916885) Calling .Start
	I0731 20:58:19.831409  188133 main.go:141] libmachine: (no-preload-916885) Ensuring networks are active...
	I0731 20:58:19.832210  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network default is active
	I0731 20:58:19.832536  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network mk-no-preload-916885 is active
	I0731 20:58:19.832885  188133 main.go:141] libmachine: (no-preload-916885) Getting domain xml...
	I0731 20:58:19.833563  188133 main.go:141] libmachine: (no-preload-916885) Creating domain...
	I0731 20:58:21.031310  188133 main.go:141] libmachine: (no-preload-916885) Waiting to get IP...
	I0731 20:58:21.032067  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.032519  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.032626  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.032509  189287 retry.go:31] will retry after 207.547113ms: waiting for machine to come up
	I0731 20:58:21.242229  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.242716  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.242797  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.242683  189287 retry.go:31] will retry after 307.483232ms: waiting for machine to come up
	I0731 20:58:21.552437  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.552954  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.552977  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.552911  189287 retry.go:31] will retry after 441.063904ms: waiting for machine to come up
	I0731 20:58:21.995514  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.995860  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.995903  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.995813  189287 retry.go:31] will retry after 596.915537ms: waiting for machine to come up
	I0731 20:58:22.594563  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:22.595037  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:22.595079  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:22.594988  189287 retry.go:31] will retry after 471.207023ms: waiting for machine to come up
	I0731 20:58:23.067499  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.067926  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.067950  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.067899  189287 retry.go:31] will retry after 756.851428ms: waiting for machine to come up
	I0731 20:58:23.826869  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.827277  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.827305  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.827232  189287 retry.go:31] will retry after 981.303239ms: waiting for machine to come up
	I0731 20:58:24.810830  187862 start.go:360] acquireMachinesLock for embed-certs-831240: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:58:24.810239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:24.810615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:24.810651  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:24.810584  189287 retry.go:31] will retry after 1.18169902s: waiting for machine to come up
	I0731 20:58:25.994320  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:25.994700  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:25.994728  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:25.994635  189287 retry.go:31] will retry after 1.781207961s: waiting for machine to come up
	I0731 20:58:27.778381  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:27.778764  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:27.778805  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:27.778734  189287 retry.go:31] will retry after 1.885603462s: waiting for machine to come up
	I0731 20:58:29.665633  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:29.666049  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:29.666070  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:29.666026  189287 retry.go:31] will retry after 2.664379174s: waiting for machine to come up
	I0731 20:58:32.333226  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:32.333615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:32.333644  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:32.333594  189287 retry.go:31] will retry after 2.932420774s: waiting for machine to come up
	I0731 20:58:35.267165  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:35.267527  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:35.267558  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:35.267496  189287 retry.go:31] will retry after 4.378841892s: waiting for machine to come up
	I0731 20:58:41.010483  188266 start.go:364] duration metric: took 4m25.11688001s to acquireMachinesLock for "default-k8s-diff-port-125614"
	I0731 20:58:41.010557  188266 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:41.010566  188266 fix.go:54] fixHost starting: 
	I0731 20:58:41.010992  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:41.011033  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:41.030450  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0731 20:58:41.030910  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:41.031360  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:58:41.031382  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:41.031703  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:41.031859  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:41.032020  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:58:41.033653  188266 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125614: state=Stopped err=<nil>
	I0731 20:58:41.033695  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	W0731 20:58:41.033872  188266 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:41.035898  188266 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-125614" ...
	I0731 20:58:39.650969  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651458  188133 main.go:141] libmachine: (no-preload-916885) Found IP for machine: 192.168.72.239
	I0731 20:58:39.651475  188133 main.go:141] libmachine: (no-preload-916885) Reserving static IP address...
	I0731 20:58:39.651516  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has current primary IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651957  188133 main.go:141] libmachine: (no-preload-916885) Reserved static IP address: 192.168.72.239
	I0731 20:58:39.651995  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.652023  188133 main.go:141] libmachine: (no-preload-916885) Waiting for SSH to be available...
	I0731 20:58:39.652054  188133 main.go:141] libmachine: (no-preload-916885) DBG | skip adding static IP to network mk-no-preload-916885 - found existing host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"}
	I0731 20:58:39.652073  188133 main.go:141] libmachine: (no-preload-916885) DBG | Getting to WaitForSSH function...
	I0731 20:58:39.654095  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654450  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.654479  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654636  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH client type: external
	I0731 20:58:39.654659  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa (-rw-------)
	I0731 20:58:39.654714  188133 main.go:141] libmachine: (no-preload-916885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:39.654729  188133 main.go:141] libmachine: (no-preload-916885) DBG | About to run SSH command:
	I0731 20:58:39.654768  188133 main.go:141] libmachine: (no-preload-916885) DBG | exit 0
	I0731 20:58:39.781409  188133 main.go:141] libmachine: (no-preload-916885) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:39.781741  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetConfigRaw
	I0731 20:58:39.782349  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:39.784813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785234  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.785266  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785643  188133 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/config.json ...
	I0731 20:58:39.785859  188133 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:39.785879  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:39.786095  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.788573  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.788840  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.788868  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.789025  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.789203  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789495  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.789661  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.789927  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.789941  188133 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:39.901661  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:39.901687  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.901920  188133 buildroot.go:166] provisioning hostname "no-preload-916885"
	I0731 20:58:39.901953  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.902142  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.904763  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905159  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.905186  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905347  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.905534  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905698  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905822  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.905977  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.906137  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.906155  188133 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-916885 && echo "no-preload-916885" | sudo tee /etc/hostname
	I0731 20:58:40.030955  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-916885
	
	I0731 20:58:40.030979  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.033905  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034254  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.034276  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034487  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.034693  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.034868  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.035014  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.035197  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.035373  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.035392  188133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-916885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-916885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-916885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:40.154331  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:40.154381  188133 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:40.154436  188133 buildroot.go:174] setting up certificates
	I0731 20:58:40.154452  188133 provision.go:84] configureAuth start
	I0731 20:58:40.154474  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:40.154813  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:40.157702  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158053  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.158075  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158218  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.160715  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161030  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.161048  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161186  188133 provision.go:143] copyHostCerts
	I0731 20:58:40.161258  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:40.161267  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:40.161372  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:40.161477  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:40.161487  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:40.161520  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:40.161590  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:40.161606  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:40.161639  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:40.161700  188133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.no-preload-916885 san=[127.0.0.1 192.168.72.239 localhost minikube no-preload-916885]
	I0731 20:58:40.341529  188133 provision.go:177] copyRemoteCerts
	I0731 20:58:40.341586  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:40.341612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.344557  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.344851  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.344871  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.345080  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.345266  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.345432  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.345677  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.431395  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:40.455012  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 20:58:40.477721  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:40.500174  188133 provision.go:87] duration metric: took 345.705192ms to configureAuth
	I0731 20:58:40.500203  188133 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:40.500377  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 20:58:40.500462  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.503077  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503438  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.503467  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503586  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.503780  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.503947  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.504065  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.504245  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.504467  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.504489  188133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:58:40.765409  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:58:40.765448  188133 machine.go:97] duration metric: took 979.574417ms to provisionDockerMachine
	I0731 20:58:40.765460  188133 start.go:293] postStartSetup for "no-preload-916885" (driver="kvm2")
	I0731 20:58:40.765474  188133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:58:40.765525  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:40.765895  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:58:40.765928  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.768314  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768610  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.768657  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768760  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.768926  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.769089  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.769199  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.855821  188133 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:58:40.860032  188133 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:58:40.860071  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:58:40.860148  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:58:40.860251  188133 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:58:40.860367  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:58:40.869291  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:40.892945  188133 start.go:296] duration metric: took 127.469545ms for postStartSetup
	I0731 20:58:40.892991  188133 fix.go:56] duration metric: took 21.083232755s for fixHost
	I0731 20:58:40.893019  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.895784  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896166  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.896197  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896316  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.896501  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896654  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896777  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.896964  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.897133  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.897143  188133 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:58:41.010330  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459520.969906971
	
	I0731 20:58:41.010352  188133 fix.go:216] guest clock: 1722459520.969906971
	I0731 20:58:41.010360  188133 fix.go:229] Guest: 2024-07-31 20:58:40.969906971 +0000 UTC Remote: 2024-07-31 20:58:40.892995844 +0000 UTC m=+276.656012666 (delta=76.911127ms)
	I0731 20:58:41.010390  188133 fix.go:200] guest clock delta is within tolerance: 76.911127ms
	I0731 20:58:41.010396  188133 start.go:83] releasing machines lock for "no-preload-916885", held for 21.200662427s
	I0731 20:58:41.010429  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.010733  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:41.013519  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.013841  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.013867  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.014034  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014637  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014829  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014914  188133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:58:41.014974  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.015051  188133 ssh_runner.go:195] Run: cat /version.json
	I0731 20:58:41.015074  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.017813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.017837  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018170  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018205  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018225  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018493  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018678  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018694  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018862  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018885  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018965  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.019040  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.107999  188133 ssh_runner.go:195] Run: systemctl --version
	I0731 20:58:41.133039  188133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:58:41.279485  188133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:58:41.285765  188133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:58:41.285838  188133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:58:41.302175  188133 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:58:41.302203  188133 start.go:495] detecting cgroup driver to use...
	I0731 20:58:41.302280  188133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:58:41.319896  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:58:41.334618  188133 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:58:41.334689  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:58:41.348292  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:58:41.363968  188133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:58:41.472992  188133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:58:41.605581  188133 docker.go:233] disabling docker service ...
	I0731 20:58:41.605669  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:58:41.620414  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:58:41.632951  188133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:58:41.783942  188133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:58:41.912311  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:58:41.931076  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:58:41.954672  188133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 20:58:41.954752  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.967478  188133 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:58:41.967567  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.978990  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.991689  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.003168  188133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:58:42.019114  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.034607  188133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.057543  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.070420  188133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:58:42.081173  188133 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:58:42.081245  188133 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:58:42.095455  188133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:58:42.106943  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:42.221724  188133 ssh_runner.go:195] Run: sudo systemctl restart crio
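(Editor's note: the sed commands above rewrite the CRI-O drop-in config to pin the pause image and switch to the cgroupfs cgroup manager, then restart crio. A hedged Go sketch of the same in-place edit, done locally rather than over SSH, is shown below; the file path and keys come from the log, the regexes are simplified, and this is not minikube's implementation.)

    // crio_conf.go - minimal sketch of the CRI-O drop-in edits above.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Pin the pause image and set the cgroup manager, as in the log.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    	// A `systemctl restart crio` is still needed for the change to apply.
    }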
	I0731 20:58:42.375966  188133 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:58:42.376051  188133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:58:42.381473  188133 start.go:563] Will wait 60s for crictl version
	I0731 20:58:42.381548  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.385364  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:58:42.426783  188133 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:58:42.426872  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.459096  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.490043  188133 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 20:58:42.491578  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:42.494915  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495289  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:42.495310  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495610  188133 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 20:58:42.500266  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:42.515164  188133 kubeadm.go:883] updating cluster {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:58:42.515295  188133 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 20:58:42.515332  188133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:58:42.551930  188133 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 20:58:42.551961  188133 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:58:42.552025  188133 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.552047  188133 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 20:58:42.552067  188133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.552087  188133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.552071  188133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.552028  188133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.552129  188133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.552035  188133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554026  188133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.554044  188133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.554103  188133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554112  188133 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 20:58:42.554123  188133 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.554030  188133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.554032  188133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.554027  188133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.721659  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.743910  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.750941  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 20:58:42.772074  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.781921  188133 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 20:58:42.781964  188133 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.782014  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.793926  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.813112  188133 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 20:58:42.813154  188133 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.813202  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.916544  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.937647  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.948145  188133 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 20:58:42.948194  188133 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.948208  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.948237  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.948268  188133 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 20:58:42.948300  188133 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.948338  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.948341  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.006187  188133 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 20:58:43.006238  188133 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.006295  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045484  188133 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 20:58:43.045541  188133 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.045585  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045589  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:43.045643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 20:58:43.045710  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 20:58:43.045730  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.045741  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:43.045780  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.045823  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:43.122382  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122429  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 20:58:43.122449  188133 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122489  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122497  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122513  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 20:58:43.122517  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122588  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.122637  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.122731  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.522969  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:41.037393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Start
	I0731 20:58:41.037575  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring networks are active...
	I0731 20:58:41.038366  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network default is active
	I0731 20:58:41.038703  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network mk-default-k8s-diff-port-125614 is active
	I0731 20:58:41.039402  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Getting domain xml...
	I0731 20:58:41.040218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Creating domain...
	I0731 20:58:42.319123  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting to get IP...
	I0731 20:58:42.320314  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320801  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320908  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.320797  189429 retry.go:31] will retry after 274.801111ms: waiting for machine to come up
	I0731 20:58:42.597444  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597866  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597914  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.597842  189429 retry.go:31] will retry after 382.328248ms: waiting for machine to come up
	I0731 20:58:42.981533  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982018  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982051  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.981955  189429 retry.go:31] will retry after 426.247953ms: waiting for machine to come up
	I0731 20:58:43.409523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.409795  189429 retry.go:31] will retry after 483.501118ms: waiting for machine to come up
	I0731 20:58:43.894451  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894844  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894874  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.894779  189429 retry.go:31] will retry after 759.968593ms: waiting for machine to come up
	I0731 20:58:44.656097  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656551  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656580  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:44.656503  189429 retry.go:31] will retry after 766.563008ms: waiting for machine to come up
	I0731 20:58:45.424382  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424793  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424831  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:45.424744  189429 retry.go:31] will retry after 1.172047019s: waiting for machine to come up
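(Editor's note: the interleaved 188266 lines above are the "waiting for machine to come up" loop, which polls for a DHCP lease with growing delays. A small Go sketch of that retry shape follows; lookupIP is a hypothetical stand-in for the libvirt lease query and the delays are only roughly like the log's.)

    // wait_for_ip.go - illustrative retry-with-backoff sketch.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // lookupIP is a placeholder; the real flow inspects libvirt DHCP leases.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    func waitForIP(domain string, deadline time.Duration) (string, error) {
    	delay := 250 * time.Millisecond
    	for start := time.Now(); time.Since(start) < deadline; {
    		if ip, err := lookupIP(domain); err == nil {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		delay += delay / 2 // grow the wait between polls
    	}
    	return "", fmt.Errorf("%s never reported an IP", domain)
    }

    func main() {
    	ip, err := waitForIP("default-k8s-diff-port-125614", 2*time.Second)
    	fmt.Println(ip, err)
    }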
	I0731 20:58:45.107333  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.984807614s)
	I0731 20:58:45.107368  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 20:58:45.107393  188133 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107452  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107471  188133 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0: (1.98485492s)
	I0731 20:58:45.107523  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.985012474s)
	I0731 20:58:45.107534  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107560  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107563  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.984910291s)
	I0731 20:58:45.107585  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107609  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.984862504s)
	I0731 20:58:45.107619  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107626  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107668  188133 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.584674739s)
	I0731 20:58:45.107701  188133 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 20:58:45.107729  188133 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:45.107761  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:48.706832  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.599347822s)
	I0731 20:58:48.706872  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 20:58:48.706886  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (3.599247467s)
	I0731 20:58:48.706923  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 20:58:48.706898  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.706925  188133 ssh_runner.go:235] Completed: which crictl: (3.599146318s)
	I0731 20:58:48.706979  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:48.706980  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.747292  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 20:58:48.747415  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:46.598636  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599086  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599117  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:46.599033  189429 retry.go:31] will retry after 1.204122239s: waiting for machine to come up
	I0731 20:58:47.805441  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805922  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:47.805864  189429 retry.go:31] will retry after 1.466632244s: waiting for machine to come up
	I0731 20:58:49.274527  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275030  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:49.274961  189429 retry.go:31] will retry after 2.04848438s: waiting for machine to come up
	I0731 20:58:50.902082  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.154633427s)
	I0731 20:58:50.902138  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 20:58:50.902203  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.195118092s)
	I0731 20:58:50.902226  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 20:58:50.902259  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:50.902320  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:52.863335  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.960989386s)
	I0731 20:58:52.863370  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 20:58:52.863394  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:52.863434  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:51.324633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325056  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325080  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:51.324983  189429 retry.go:31] will retry after 1.991151757s: waiting for machine to come up
	I0731 20:58:53.318784  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319133  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319164  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:53.319077  189429 retry.go:31] will retry after 2.631932264s: waiting for machine to come up
	I0731 20:58:54.629811  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.766355185s)
	I0731 20:58:54.629840  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 20:58:54.629882  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:54.629954  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:55.983610  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.353622135s)
	I0731 20:58:55.983655  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 20:58:55.983692  188133 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:55.983764  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:56.828512  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 20:58:56.828560  188133 cache_images.go:123] Successfully loaded all cached images
	I0731 20:58:56.828568  188133 cache_images.go:92] duration metric: took 14.276593942s to LoadCachedImages
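(Editor's note: the LoadCachedImages sequence above checks each image with `podman image inspect`, removes any stale tag with `crictl rmi`, and loads the cached tarball with `podman load -i`. The Go sketch below mirrors that flow for a single image; the tarball path follows the log, and this is illustrative rather than minikube's cache_images code.)

    // load_cached_images.go - sketch of the image-cache load path above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func loadIfMissing(image, tarball string) error {
    	// podman image inspect exits non-zero when the image is absent.
    	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
    		return nil // already present in the container runtime
    	}
    	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "not found"
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := loadIfMissing("registry.k8s.io/etcd:3.5.14-0",
    		"/var/lib/minikube/images/etcd_3.5.14-0"); err != nil {
    		fmt.Println(err)
    	}
    }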
	I0731 20:58:56.828583  188133 kubeadm.go:934] updating node { 192.168.72.239 8443 v1.31.0-beta.0 crio true true} ...
	I0731 20:58:56.828722  188133 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-916885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:58:56.828806  188133 ssh_runner.go:195] Run: crio config
	I0731 20:58:56.877187  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:58:56.877222  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:58:56.877245  188133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:58:56.877269  188133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.239 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-916885 NodeName:no-preload-916885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:58:56.877442  188133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-916885"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:58:56.877526  188133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 20:58:56.887721  188133 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:58:56.887796  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:58:56.896845  188133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 20:58:56.912886  188133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 20:58:56.928914  188133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
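(Editor's note: the "scp memory --> ..." lines above stream generated file contents straight from memory to the guest. A minimal Go sketch of that pattern, piping a payload into `sudo tee` over SSH, is below; host, user and key path come from the log, the unit body is shortened, and the helper name is hypothetical.)

    // write_remote_file.go - sketch of the "scp memory --> <path>" step.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // writeRemoteFile streams an in-memory payload to the guest and writes
    // it to remotePath with sudo tee.
    func writeRemoteFile(user, host, keyPath, remotePath string, payload []byte) error {
    	cmd := exec.Command("ssh", "-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    	cmd.Stdin = bytes.NewReader(payload)
    	return cmd.Run()
    }

    func main() {
    	unit := []byte("[Unit]\nWants=crio.service\n")
    	err := writeRemoteFile("docker", "192.168.72.239",
    		"/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa",
    		"/lib/systemd/system/kubelet.service", unit)
    	fmt.Println(err)
    }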
	I0731 20:58:56.945604  188133 ssh_runner.go:195] Run: grep 192.168.72.239	control-plane.minikube.internal$ /etc/hosts
	I0731 20:58:56.949538  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:56.961490  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:57.075114  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:58:57.091701  188133 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885 for IP: 192.168.72.239
	I0731 20:58:57.091724  188133 certs.go:194] generating shared ca certs ...
	I0731 20:58:57.091743  188133 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:58:57.091909  188133 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:58:57.091959  188133 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:58:57.091971  188133 certs.go:256] generating profile certs ...
	I0731 20:58:57.092062  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/client.key
	I0731 20:58:57.092141  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key.cc7e9c96
	I0731 20:58:57.092193  188133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key
	I0731 20:58:57.092330  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:58:57.092405  188133 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:58:57.092423  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:58:57.092458  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:58:57.092489  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:58:57.092520  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:58:57.092586  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:57.093296  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:58:57.139431  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:58:57.169132  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:58:57.196541  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:58:57.232826  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:58:57.260875  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:58:57.290195  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:58:57.316645  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:58:57.339741  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:58:57.362406  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:58:57.385009  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:58:57.407540  188133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:58:57.423697  188133 ssh_runner.go:195] Run: openssl version
	I0731 20:58:57.429741  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:58:57.440545  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.444984  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.445035  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.450651  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:58:57.460547  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:58:57.470575  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474939  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474988  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.480481  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:58:57.490404  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:58:57.500433  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504785  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504835  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.510165  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
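
The openssl/ln pairs above populate OpenSSL's hashed certificate directory: each CA copied into /usr/share/ca-certificates also has to be reachable under /etc/ssl/certs/<subject-hash>.0 for path-based verification to find it. A minimal Go sketch of that step, shelling out to openssl the same way the ssh_runner lines do; linkCAByHash and the hard-coded paths are illustrative, not minikube's API.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash mirrors the logged steps: ask openssl for the subject-name
// hash of a CA certificate, then expose the cert as <certsDir>/<hash>.0 so
// hashed-directory lookups can find it. Paths here are illustrative.
func linkCAByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace a stale symlink if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}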
	I0731 20:58:57.520019  188133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:58:57.524596  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:58:57.530667  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:58:57.536315  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:58:57.542049  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:58:57.547594  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:58:57.553084  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
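
Each `openssl x509 ... -checkend 86400` call above asks whether the named certificate expires within the next 24 hours; only when every control-plane cert passes does the restart path reuse the existing certs instead of regenerating them. A rough Go equivalent using crypto/x509 (the file names are taken from the log; expiresWithin itself is an assumption, not minikube's helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as `openssl x509 -checkend`:
// does the PEM certificate at path expire within the given window?
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// File names taken from the log; running this needs read access to them.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}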
	I0731 20:58:57.558419  188133 kubeadm.go:392] StartCluster: {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:58:57.558501  188133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:58:57.558537  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.600004  188133 cri.go:89] found id: ""
	I0731 20:58:57.600087  188133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:58:57.609911  188133 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:58:57.609933  188133 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:58:57.609975  188133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:58:57.619498  188133 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:58:57.621885  188133 kubeconfig.go:125] found "no-preload-916885" server: "https://192.168.72.239:8443"
	I0731 20:58:57.624838  188133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:58:57.633984  188133 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.239
	I0731 20:58:57.634025  188133 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:58:57.634037  188133 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:58:57.634080  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.672988  188133 cri.go:89] found id: ""
	I0731 20:58:57.673053  188133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:58:57.689149  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:58:57.698520  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:58:57.698541  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 20:58:57.698595  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:58:57.707106  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:58:57.707163  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:58:57.715878  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:58:57.724169  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:58:57.724219  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:58:57.732890  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.741121  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:58:57.741174  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.749776  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:58:57.758063  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:58:57.758114  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:58:57.766815  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:58:57.775595  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:57.883689  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.740684  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.926231  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.987089  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:59.049782  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:58:59.049862  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.418227  188656 start.go:364] duration metric: took 3m46.480116699s to acquireMachinesLock for "old-k8s-version-239115"
	I0731 20:59:00.418294  188656 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:00.418302  188656 fix.go:54] fixHost starting: 
	I0731 20:59:00.418738  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:00.418773  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:00.438533  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0731 20:59:00.438963  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:00.439499  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:59:00.439524  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:00.439930  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:00.441449  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:00.441651  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetState
	I0731 20:59:00.443465  188656 fix.go:112] recreateIfNeeded on old-k8s-version-239115: state=Stopped err=<nil>
	I0731 20:59:00.443505  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	W0731 20:59:00.443679  188656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:00.445840  188656 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239115" ...
	I0731 20:58:55.953940  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954422  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:55.954356  189429 retry.go:31] will retry after 3.068212527s: waiting for machine to come up
	I0731 20:58:59.025966  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026388  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has current primary IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026406  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Found IP for machine: 192.168.50.221
	I0731 20:58:59.026417  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserving static IP address...
	I0731 20:58:59.026867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserved static IP address: 192.168.50.221
	I0731 20:58:59.026918  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.026933  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for SSH to be available...
	I0731 20:58:59.026954  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | skip adding static IP to network mk-default-k8s-diff-port-125614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"}
	I0731 20:58:59.026972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Getting to WaitForSSH function...
	I0731 20:58:59.029330  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029685  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.029720  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029820  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH client type: external
	I0731 20:58:59.029846  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa (-rw-------)
	I0731 20:58:59.029877  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:59.029894  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | About to run SSH command:
	I0731 20:58:59.029906  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | exit 0
	I0731 20:58:59.161209  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:59.161713  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetConfigRaw
	I0731 20:58:59.162331  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.164645  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.164953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.164986  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.165269  188266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/config.json ...
	I0731 20:58:59.165479  188266 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:59.165503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:59.165692  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.167796  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.168110  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168247  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.168408  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168626  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168763  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.168901  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.169103  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.169115  188266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:59.281875  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:59.281901  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282185  188266 buildroot.go:166] provisioning hostname "default-k8s-diff-port-125614"
	I0731 20:58:59.282218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282460  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.284970  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285461  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.285498  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285612  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.285814  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286139  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.286278  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.286445  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.286460  188266 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125614 && echo "default-k8s-diff-port-125614" | sudo tee /etc/hostname
	I0731 20:58:59.411873  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125614
	
	I0731 20:58:59.411904  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.414733  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.415099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415274  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.415463  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415604  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415751  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.415898  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.416074  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.416090  188266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:59.539168  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:59.539210  188266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:59.539247  188266 buildroot.go:174] setting up certificates
	I0731 20:58:59.539256  188266 provision.go:84] configureAuth start
	I0731 20:58:59.539267  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.539595  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.542447  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.542887  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.542916  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.543103  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.545597  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.545972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.545992  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.546128  188266 provision.go:143] copyHostCerts
	I0731 20:58:59.546195  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:59.546206  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:59.546265  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:59.546366  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:59.546377  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:59.546407  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:59.546488  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:59.546492  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:59.546517  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:59.546565  188266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125614 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-125614 localhost minikube]
	I0731 20:58:59.690753  188266 provision.go:177] copyRemoteCerts
	I0731 20:58:59.690811  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:59.690839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.693800  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694141  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.694175  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694383  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.694583  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.694748  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.694884  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:58:59.783710  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:59.814512  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 20:58:59.843492  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:59.867793  188266 provision.go:87] duration metric: took 328.521723ms to configureAuth
	I0731 20:58:59.867840  188266 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:59.868013  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:58:59.868089  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.871214  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871655  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.871684  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871875  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.872127  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872309  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.872687  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.872909  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.872935  188266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:00.165458  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:00.165492  188266 machine.go:97] duration metric: took 999.996831ms to provisionDockerMachine
	I0731 20:59:00.165509  188266 start.go:293] postStartSetup for "default-k8s-diff-port-125614" (driver="kvm2")
	I0731 20:59:00.165527  188266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:00.165549  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.165936  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:00.165973  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.168477  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168837  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.168864  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168991  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.169203  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.169387  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.169575  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.262132  188266 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:00.266596  188266 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:00.266621  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:00.266695  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:00.266789  188266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:00.266909  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:00.276407  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:00.300017  188266 start.go:296] duration metric: took 134.490488ms for postStartSetup
	I0731 20:59:00.300061  188266 fix.go:56] duration metric: took 19.289494966s for fixHost
	I0731 20:59:00.300087  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.302714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303073  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.303106  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303249  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.303448  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303786  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.303978  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:00.304204  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:59:00.304217  188266 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:59:00.418073  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459540.389901096
	
	I0731 20:59:00.418096  188266 fix.go:216] guest clock: 1722459540.389901096
	I0731 20:59:00.418105  188266 fix.go:229] Guest: 2024-07-31 20:59:00.389901096 +0000 UTC Remote: 2024-07-31 20:59:00.30006642 +0000 UTC m=+284.542031804 (delta=89.834676ms)
	I0731 20:59:00.418130  188266 fix.go:200] guest clock delta is within tolerance: 89.834676ms
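
The fix.go lines read the guest's `date +%s.%N`, compare it with the host clock, and only flag the machine when the delta exceeds a tolerance. A small sketch of that comparison; the one-second tolerance used below is an assumption, not minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK compares a guest timestamp (seconds since the epoch, as read
// back from `date +%s.%N`) against the local clock and reports whether the
// skew stays inside the tolerance.
func clockDeltaOK(guestUnixSeconds float64, tolerance time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestUnixSeconds*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest timestamp copied from the log line above.
	delta, ok := clockDeltaOK(1722459540.389901096, time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}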
	I0731 20:59:00.418138  188266 start.go:83] releasing machines lock for "default-k8s-diff-port-125614", held for 19.407605953s
	I0731 20:59:00.418167  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.418669  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:00.421683  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422050  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.422090  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422234  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422999  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.423061  188266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:00.423119  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.423354  188266 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:00.423378  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.426188  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426362  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426603  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426631  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426790  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.426882  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426929  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.427019  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427197  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427208  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.427363  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.427380  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427668  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.511834  188266 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:00.536649  188266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:00.692463  188266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:00.700344  188266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:00.700413  188266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:00.721837  188266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:00.721863  188266 start.go:495] detecting cgroup driver to use...
	I0731 20:59:00.721940  188266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:00.742477  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:00.760049  188266 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:00.760120  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:00.777823  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:00.791680  188266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:00.908094  188266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:01.051284  188266 docker.go:233] disabling docker service ...
	I0731 20:59:01.051379  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:01.070927  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:01.083393  188266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:01.223186  188266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:01.355265  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:01.369810  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:01.390523  188266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:01.390588  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.401241  188266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:01.401308  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.412049  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.422145  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.432523  188266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:01.442640  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.456933  188266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.475628  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
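
The run of sed one-liners above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, move conmon into the pod cgroup, and allow unprivileged low ports. A hedged Go sketch of one such substitution (the path and key come from the log; setCrioOption is illustrative and would need root on a real node):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption replaces a `key = ...` line in a CRI-O drop-in config,
// mirroring the sed one-liners in the log.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}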
	I0731 20:59:01.486226  188266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:01.496757  188266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:01.496813  188266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:01.510264  188266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:01.520231  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:01.636451  188266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:01.784134  188266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:01.784220  188266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:01.788836  188266 start.go:563] Will wait 60s for crictl version
	I0731 20:59:01.788895  188266 ssh_runner.go:195] Run: which crictl
	I0731 20:59:01.793059  188266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:01.840110  188266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:01.840200  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.868816  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.908539  188266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:00.447208  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .Start
	I0731 20:59:00.447389  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring networks are active...
	I0731 20:59:00.448116  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network default is active
	I0731 20:59:00.448589  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network mk-old-k8s-version-239115 is active
	I0731 20:59:00.448892  188656 main.go:141] libmachine: (old-k8s-version-239115) Getting domain xml...
	I0731 20:59:00.450110  188656 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:59:01.823554  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting to get IP...
	I0731 20:59:01.824648  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:01.825111  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:01.825172  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:01.825080  189574 retry.go:31] will retry after 241.700507ms: waiting for machine to come up
	I0731 20:59:02.068913  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.069608  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.069738  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.069663  189574 retry.go:31] will retry after 258.921821ms: waiting for machine to come up
	I0731 20:59:02.330231  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.330846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.330877  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.330776  189574 retry.go:31] will retry after 460.911793ms: waiting for machine to come up
	I0731 20:59:02.793718  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.794177  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.794206  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.794156  189574 retry.go:31] will retry after 380.241989ms: waiting for machine to come up
	I0731 20:59:03.175918  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.176761  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.176786  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.176670  189574 retry.go:31] will retry after 631.876736ms: waiting for machine to come up
	I0731 20:59:03.810803  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.811478  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.811503  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.811366  189574 retry.go:31] will retry after 583.328017ms: waiting for machine to come up
	I0731 20:58:59.550347  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.050077  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.066942  188133 api_server.go:72] duration metric: took 1.017157745s to wait for apiserver process to appear ...
	I0731 20:59:00.066991  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:00.067016  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:00.067685  188133 api_server.go:269] stopped: https://192.168.72.239:8443/healthz: Get "https://192.168.72.239:8443/healthz": dial tcp 192.168.72.239:8443: connect: connection refused
	I0731 20:59:00.567237  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.555694  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.555739  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.555756  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.606602  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.606641  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.606657  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.617900  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.617935  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:04.067724  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.073838  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.073875  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:04.568116  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.575013  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.575044  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:05.067154  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:05.073314  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 20:59:05.083559  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 20:59:05.083595  188133 api_server.go:131] duration metric: took 5.016595337s to wait for apiserver health ...
	I0731 20:59:05.083606  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:59:05.083614  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:05.085564  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:01.910091  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:01.913322  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.913714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:01.913747  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.914046  188266 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:01.918504  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:01.930599  188266 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:01.930756  188266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:01.930826  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:01.968796  188266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:01.968882  188266 ssh_runner.go:195] Run: which lz4
	I0731 20:59:01.974123  188266 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:59:01.979542  188266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:01.979575  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:03.529579  188266 crio.go:462] duration metric: took 1.555502358s to copy over tarball
	I0731 20:59:03.529662  188266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:04.395886  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:04.396400  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:04.396664  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:04.396347  189574 retry.go:31] will retry after 1.154504022s: waiting for machine to come up
	I0731 20:59:05.552240  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:05.552879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:05.552901  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:05.552831  189574 retry.go:31] will retry after 1.037365333s: waiting for machine to come up
	I0731 20:59:06.591875  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:06.592416  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:06.592450  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:06.592329  189574 retry.go:31] will retry after 1.249444079s: waiting for machine to come up
	I0731 20:59:07.843058  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:07.843436  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:07.843463  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:07.843370  189574 retry.go:31] will retry after 1.700521776s: waiting for machine to come up
	I0731 20:59:05.087080  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:05.105303  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
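The 496-byte file copied to /etc/cni/net.d above is a bridge CNI conflist for the cluster's 10.244.0.0/16 pod CIDR. The log does not show its contents; the sketch below embeds a representative bridge + portmap configuration (illustrative only, not necessarily the file minikube generates):

package main

import "fmt"

// A representative bridge CNI conflist for the 10.244.0.0/16 pod CIDR.
// Illustrative only; the actual 1-k8s.conflist may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// On a node this would be written to /etc/cni/net.d/1-k8s.conflist (root required);
	// here it is only printed.
	fmt.Println(bridgeConflist)
}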
	I0731 20:59:05.125019  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:05.136768  188133 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:05.136823  188133 system_pods.go:61] "coredns-5cfdc65f69-c9gcf" [3b9458d3-81d0-4138-8a6a-81f087c3ed02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:05.136836  188133 system_pods.go:61] "etcd-no-preload-916885" [aa31006d-8e74-48c2-9b5d-5604b3a1c283] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:05.136847  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [64549ba0-8e30-41d3-82eb-cdb729623a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:05.136856  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [2620c741-c27a-4df5-8555-58767d43c675] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:05.136866  188133 system_pods.go:61] "kube-proxy-99jgm" [0060c1a0-badc-401c-a4dc-5cfaa958654e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:05.136880  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [f02a0a1d-5cbb-4ee3-a084-21710667565e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:05.136894  188133 system_pods.go:61] "metrics-server-78fcd8795b-jrzgg" [acbe48be-32e9-44f8-9bf2-52e0e92a09e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:05.136912  188133 system_pods.go:61] "storage-provisioner" [d0f902cd-d1db-4c70-bdad-34bda917cec1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:05.136926  188133 system_pods.go:74] duration metric: took 11.882384ms to wait for pod list to return data ...
	I0731 20:59:05.136937  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:05.142117  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:05.142149  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:05.142165  188133 node_conditions.go:105] duration metric: took 5.221098ms to run NodePressure ...
	I0731 20:59:05.142187  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:05.534597  188133 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539583  188133 kubeadm.go:739] kubelet initialised
	I0731 20:59:05.539604  188133 kubeadm.go:740] duration metric: took 4.980297ms waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539626  188133 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:05.544498  188133 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:07.778624  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:06.024682  188266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.494984583s)
	I0731 20:59:06.024718  188266 crio.go:469] duration metric: took 2.495107603s to extract the tarball
	I0731 20:59:06.024729  188266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:06.062675  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:06.107619  188266 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:06.107649  188266 cache_images.go:84] Images are preloaded, skipping loading
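The "sudo crictl images --output json" calls above decide whether the preload tarball has to be copied at all: if a required image such as registry.k8s.io/kube-apiserver:v1.30.3 is missing from the JSON listing, the tarball is transferred and extracted; once it is present, loading is skipped. A small hypothetical sketch of that check (only the images/repoTags fields of the crictl JSON are assumed):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the parts of `crictl images --output json` we need.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the CRI runtime already has the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
	fmt.Println("preloaded:", ok, "err:", err)
}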
	I0731 20:59:06.107667  188266 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0731 20:59:06.107792  188266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-125614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:06.107863  188266 ssh_runner.go:195] Run: crio config
	I0731 20:59:06.173983  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:06.174007  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:06.174019  188266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:06.174040  188266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125614 NodeName:default-k8s-diff-port-125614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:06.174168  188266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125614"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:06.174233  188266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:06.185059  188266 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:06.185189  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:06.196571  188266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 20:59:06.218964  188266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:06.239033  188266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 20:59:06.260519  188266 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:06.264718  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:06.278173  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:06.423941  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:06.441663  188266 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614 for IP: 192.168.50.221
	I0731 20:59:06.441689  188266 certs.go:194] generating shared ca certs ...
	I0731 20:59:06.441711  188266 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:06.441906  188266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:06.441965  188266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:06.441978  188266 certs.go:256] generating profile certs ...
	I0731 20:59:06.442080  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/client.key
	I0731 20:59:06.442157  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key.9cb12361
	I0731 20:59:06.442205  188266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key
	I0731 20:59:06.442354  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:06.442391  188266 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:06.442404  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:06.442447  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:06.442478  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:06.442522  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:06.442580  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:06.443470  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:06.497056  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:06.530978  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:06.574533  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:06.619523  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 20:59:06.648269  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:06.677824  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:06.704450  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:06.731606  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:06.756990  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:06.781214  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:06.804855  188266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:06.821531  188266 ssh_runner.go:195] Run: openssl version
	I0731 20:59:06.827394  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:06.838680  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843618  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843681  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.850238  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:06.865533  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:06.881516  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886809  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886876  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.893345  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:06.908919  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:06.922150  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927165  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927226  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.933724  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:06.946420  188266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:06.951347  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:06.959595  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:06.967808  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:06.977083  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:06.985089  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:06.992190  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
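The series of "openssl x509 -noout -in ... -checkend 86400" commands above verifies that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds) before the existing certificates are reused. The same check expressed in Go, as a rough sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching `openssl x509 -checkend <seconds>` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}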
	I0731 20:59:06.998458  188266 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:06.998548  188266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:06.998592  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.053176  188266 cri.go:89] found id: ""
	I0731 20:59:07.053256  188266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:07.064373  188266 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:07.064392  188266 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:07.064433  188266 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:07.075167  188266 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:07.076057  188266 kubeconfig.go:125] found "default-k8s-diff-port-125614" server: "https://192.168.50.221:8444"
	I0731 20:59:07.078091  188266 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:07.089136  188266 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0731 20:59:07.089161  188266 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:07.089174  188266 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:07.089225  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.133015  188266 cri.go:89] found id: ""
	I0731 20:59:07.133099  188266 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:07.155229  188266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:07.166326  188266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:07.166348  188266 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:07.166418  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 20:59:07.176709  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:07.176768  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:07.187232  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 20:59:07.197376  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:07.197453  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:07.209451  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.221141  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:07.221205  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.232016  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 20:59:07.242340  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:07.242402  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:07.253794  188266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:07.264912  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:07.382193  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.445321  188266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.063086935s)
	I0731 20:59:08.445364  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.664603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.744053  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.857284  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:08.857380  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.357505  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.857488  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.887329  188266 api_server.go:72] duration metric: took 1.030046485s to wait for apiserver process to appear ...
	I0731 20:59:09.887358  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:09.887405  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.887966  188266 api_server.go:269] stopped: https://192.168.50.221:8444/healthz: Get "https://192.168.50.221:8444/healthz": dial tcp 192.168.50.221:8444: connect: connection refused
	I0731 20:59:10.387674  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.545937  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:09.546581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:09.546605  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:09.546529  189574 retry.go:31] will retry after 1.934269586s: waiting for machine to come up
	I0731 20:59:11.482402  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:11.482794  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:11.482823  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:11.482744  189574 retry.go:31] will retry after 2.575131422s: waiting for machine to come up
	I0731 20:59:10.053236  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:10.551437  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:10.551467  188133 pod_ready.go:81] duration metric: took 5.006944467s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:10.551480  188133 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:12.559346  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
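The pod_ready loop above (status "Ready":"False" repeated until it flips to "True") polls each system-critical pod's Ready condition with a 4-minute budget per pod. A condensed client-go sketch of that wait, assuming a hypothetical kubeconfig path and pod name (this is not the test's own pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady checks the PodReady condition on a pod's status.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-916885", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}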
	I0731 20:59:12.827297  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.827342  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.827390  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.883496  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.883538  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.887715  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.902715  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:12.902746  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.388340  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.392840  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.392872  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.888510  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.894519  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.894553  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:14.388177  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:14.392557  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 20:59:14.399285  188266 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:14.399321  188266 api_server.go:131] duration metric: took 4.511955505s to wait for apiserver health ...
	I0731 20:59:14.399333  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:14.399340  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:14.400987  188266 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:14.401981  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:14.420648  188266 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:14.441909  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:14.451365  188266 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:14.451406  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:14.451419  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:14.451426  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:14.451432  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:14.451438  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:14.451444  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:14.451461  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:14.451468  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:14.451476  188266 system_pods.go:74] duration metric: took 9.546534ms to wait for pod list to return data ...
	I0731 20:59:14.451486  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:14.454760  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:14.454784  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:14.454795  188266 node_conditions.go:105] duration metric: took 3.303087ms to run NodePressure ...
	I0731 20:59:14.454820  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:14.730635  188266 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735144  188266 kubeadm.go:739] kubelet initialised
	I0731 20:59:14.735165  188266 kubeadm.go:740] duration metric: took 4.500388ms waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735173  188266 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:14.742292  188266 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.749460  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749486  188266 pod_ready.go:81] duration metric: took 7.166399ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.749496  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749504  188266 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.757068  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757091  188266 pod_ready.go:81] duration metric: took 7.579526ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.757101  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757109  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.762181  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762203  188266 pod_ready.go:81] duration metric: took 5.083756ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.762213  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762219  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.845070  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845095  188266 pod_ready.go:81] duration metric: took 82.86894ms for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.845107  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845113  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.246100  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246131  188266 pod_ready.go:81] duration metric: took 401.011321ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.246150  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246159  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.645657  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645689  188266 pod_ready.go:81] duration metric: took 399.519543ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.645704  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645713  188266 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.045744  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045776  188266 pod_ready.go:81] duration metric: took 400.053102ms for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:16.045791  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045800  188266 pod_ready.go:38] duration metric: took 1.310615323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:16.045838  188266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:59:16.059046  188266 ops.go:34] apiserver oom_adj: -16
	I0731 20:59:16.059071  188266 kubeadm.go:597] duration metric: took 8.994671774s to restartPrimaryControlPlane
	I0731 20:59:16.059082  188266 kubeadm.go:394] duration metric: took 9.060633072s to StartCluster
	I0731 20:59:16.059104  188266 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.059181  188266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:16.060895  188266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.061143  188266 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:59:16.061226  188266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:59:16.061324  188266 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061386  188266 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061399  188266 addons.go:243] addon storage-provisioner should already be in state true
	I0731 20:59:16.061388  188266 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061400  188266 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061453  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:16.061495  188266 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061516  188266 addons.go:243] addon metrics-server should already be in state true
	I0731 20:59:16.061438  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061603  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061436  188266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125614"
	I0731 20:59:16.062072  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062084  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062085  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062110  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062127  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062188  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062822  188266 out.go:177] * Verifying Kubernetes components...
	I0731 20:59:16.064337  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:16.081194  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I0731 20:59:16.081208  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0731 20:59:16.081197  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0731 20:59:16.081872  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.081956  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082026  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082423  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082439  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.082926  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082951  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083047  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.083058  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.083076  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083712  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.083754  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.084871  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085484  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085734  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.085815  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.085845  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.089827  188266 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.089854  188266 addons.go:243] addon default-storageclass should already be in state true
	I0731 20:59:16.089884  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.090245  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.090301  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.106592  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0731 20:59:16.106609  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 20:59:16.108751  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.108849  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.109414  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109442  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109546  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109576  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109948  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.109953  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.110132  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.110163  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.111216  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0731 20:59:16.111657  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.112217  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.112239  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.112319  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.113374  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.115608  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.115649  188266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:16.115940  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.115979  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.116965  188266 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:16.117053  188266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.117069  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:59:16.117083  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.118247  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 20:59:16.118268  188266 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 20:59:16.118288  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.120985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121540  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.121563  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121764  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.121865  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.122295  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.122371  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.122490  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122552  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.122632  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.122850  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.123024  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.123218  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.133929  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0731 20:59:16.134348  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.134844  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.134865  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.135175  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.135389  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.136985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.137272  188266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.137287  188266 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:59:16.137313  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.140222  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140543  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.140560  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.140762  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140795  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.140969  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.141107  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.257677  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:16.275791  188266 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:16.373528  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 20:59:16.373552  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 20:59:16.380797  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.404028  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.406072  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 20:59:16.406098  188266 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 20:59:16.456003  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:16.456030  188266 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 20:59:16.517304  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:17.377438  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377468  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377514  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377565  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377765  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377780  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377797  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377827  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377835  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377930  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378354  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378417  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378424  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.378569  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378583  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.384110  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.384130  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.384325  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.384341  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428457  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428480  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428766  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.428782  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428804  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.429011  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.429024  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.429040  188266 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-125614"
	I0731 20:59:17.431884  188266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 20:59:14.059385  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:14.059857  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:14.059879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:14.059819  189574 retry.go:31] will retry after 3.127857327s: waiting for machine to come up
	I0731 20:59:17.189405  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:17.189871  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:17.189902  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:17.189821  189574 retry.go:31] will retry after 4.516767425s: waiting for machine to come up
	I0731 20:59:14.559493  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:16.561540  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:16.561568  188133 pod_ready.go:81] duration metric: took 6.010079286s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.561580  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068734  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.068756  188133 pod_ready.go:81] duration metric: took 1.507167128s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068766  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073069  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.073086  188133 pod_ready.go:81] duration metric: took 4.313817ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073095  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077480  188133 pod_ready.go:92] pod "kube-proxy-99jgm" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.077497  188133 pod_ready.go:81] duration metric: took 4.395483ms for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077506  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082197  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.082221  188133 pod_ready.go:81] duration metric: took 4.709042ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082234  188133 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:17.433072  188266 addons.go:510] duration metric: took 1.371850333s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 20:59:18.280135  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:20.280881  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.082812  187862 start.go:364] duration metric: took 58.27194035s to acquireMachinesLock for "embed-certs-831240"
	I0731 20:59:23.082866  187862 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:23.082875  187862 fix.go:54] fixHost starting: 
	I0731 20:59:23.083267  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:23.083308  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:23.101291  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0731 20:59:23.101826  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:23.102464  187862 main.go:141] libmachine: Using API Version  1
	I0731 20:59:23.102498  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:23.102817  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:23.103024  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:23.103187  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 20:59:23.105117  187862 fix.go:112] recreateIfNeeded on embed-certs-831240: state=Stopped err=<nil>
	I0731 20:59:23.105143  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	W0731 20:59:23.105307  187862 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:23.106919  187862 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831240" ...
	I0731 20:59:21.708296  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708811  188656 main.go:141] libmachine: (old-k8s-version-239115) Found IP for machine: 192.168.61.51
	I0731 20:59:21.708846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has current primary IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708860  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserving static IP address...
	I0731 20:59:21.709432  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.709663  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserved static IP address: 192.168.61.51
	I0731 20:59:21.709695  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | skip adding static IP to network mk-old-k8s-version-239115 - found existing host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"}
	I0731 20:59:21.709711  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting for SSH to be available...
	I0731 20:59:21.709723  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:59:21.711911  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712310  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.712345  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712517  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:59:21.712540  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:59:21.712581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:21.712598  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:59:21.712625  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:59:21.838026  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:21.838370  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:59:21.839169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:21.842168  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842588  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.842623  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842866  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:59:21.843126  188656 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:21.843150  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:21.843388  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.846148  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846657  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.846686  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.847165  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847360  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847530  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.847707  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.847938  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.847951  188656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:21.955109  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:21.955143  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955460  188656 buildroot.go:166] provisioning hostname "old-k8s-version-239115"
	I0731 20:59:21.955492  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955728  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.958752  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959146  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.959176  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959395  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.959620  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959781  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959918  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.960078  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.960358  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.960378  188656 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239115 && echo "old-k8s-version-239115" | sudo tee /etc/hostname
	I0731 20:59:22.090625  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239115
	
	I0731 20:59:22.090665  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.093927  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094356  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.094387  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094729  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.094942  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095153  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095364  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.095583  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.095819  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.095845  188656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:22.217153  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:22.217189  188656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:22.217215  188656 buildroot.go:174] setting up certificates
	I0731 20:59:22.217229  188656 provision.go:84] configureAuth start
	I0731 20:59:22.217242  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:22.217613  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:22.220640  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221082  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.221125  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221237  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.223811  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224152  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.224180  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224337  188656 provision.go:143] copyHostCerts
	I0731 20:59:22.224405  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:22.224418  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:22.224485  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:22.224604  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:22.224616  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:22.224654  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:22.224729  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:22.224740  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:22.224766  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:22.224833  188656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239115 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-239115]
	I0731 20:59:22.407532  188656 provision.go:177] copyRemoteCerts
	I0731 20:59:22.407599  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:22.407625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.410594  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411007  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.411033  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411338  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.411582  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.411811  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.412007  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.492781  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:22.518278  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 20:59:22.543018  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:22.568888  188656 provision.go:87] duration metric: took 351.643ms to configureAuth
	I0731 20:59:22.568920  188656 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:22.569099  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:59:22.569169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.572154  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572471  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.572500  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572669  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.572872  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.572993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.573112  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.573249  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.573481  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.573512  188656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:22.847156  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:22.847193  188656 machine.go:97] duration metric: took 1.004049055s to provisionDockerMachine
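Note: the "%!s(MISSING)" tokens in the provisioning command above (and in later commands such as date +%s.%N and stat -c "%s %y") are unfilled Go format verbs left behind by the logger, not part of what actually runs on the guest. Judging from the echoed output, the command executed is roughly this sketch:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio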
	I0731 20:59:22.847211  188656 start.go:293] postStartSetup for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:59:22.847229  188656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:22.847284  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:22.847710  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:22.847741  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.850515  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.850935  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.850962  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.851088  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.851288  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.851524  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.851674  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.932316  188656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:22.936672  188656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:22.936707  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:22.936792  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:22.936894  188656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:22.937011  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:22.946454  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:22.972952  188656 start.go:296] duration metric: took 125.72216ms for postStartSetup
	I0731 20:59:22.972996  188656 fix.go:56] duration metric: took 22.554695114s for fixHost
	I0731 20:59:22.973026  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.975758  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976166  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.976198  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976320  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.976585  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976782  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976966  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.977115  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.977275  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.977284  188656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:23.082657  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459563.026856067
	
	I0731 20:59:23.082683  188656 fix.go:216] guest clock: 1722459563.026856067
	I0731 20:59:23.082694  188656 fix.go:229] Guest: 2024-07-31 20:59:23.026856067 +0000 UTC Remote: 2024-07-31 20:59:22.973000729 +0000 UTC m=+249.171273714 (delta=53.855338ms)
	I0731 20:59:23.082721  188656 fix.go:200] guest clock delta is within tolerance: 53.855338ms
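The guest-clock delta reported above is simply guest wall clock minus host wall clock: 20:59:23.026856067 - 20:59:22.973000729 = 0.053855338 s, i.e. the 53.855338ms shown, which the fix step accepts as within tolerance, so provisioning moves on without adjusting the guest clock.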
	I0731 20:59:23.082727  188656 start.go:83] releasing machines lock for "old-k8s-version-239115", held for 22.664459101s
	I0731 20:59:23.082752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.083052  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:23.086626  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087093  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.087135  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087366  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.087954  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088159  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088251  188656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:23.088303  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.088370  188656 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:23.088392  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.091710  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.091989  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092073  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092101  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092227  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092429  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.092472  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092520  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092618  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.092752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092803  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.092931  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.093100  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.093255  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.175012  188656 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:23.200192  188656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:23.348227  188656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:23.355109  188656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:23.355195  188656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:23.371683  188656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:23.371707  188656 start.go:495] detecting cgroup driver to use...
	I0731 20:59:23.371786  188656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:23.388727  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:23.408830  188656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:23.408907  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:23.423594  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:23.437876  188656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:23.559105  188656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:23.743186  188656 docker.go:233] disabling docker service ...
	I0731 20:59:23.743253  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:23.758053  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:23.779951  188656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:20.089173  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:22.092138  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.919494  188656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:24.057230  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:24.072687  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:24.094528  188656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:59:24.094600  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.106579  188656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:24.106634  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.120079  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.130759  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.142925  188656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:24.154760  188656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:24.165059  188656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:24.165113  188656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:24.179567  188656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:24.191838  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:24.339078  188656 ssh_runner.go:195] Run: sudo systemctl restart crio
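Taken together, the cri-o configuration pass above boils down to the following guest commands (a consolidated sketch assembled from the ssh_runner entries; paths and values are exactly as logged):

    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter        # loaded because the bridge-nf-call-iptables sysctl was not present yet
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio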
	I0731 20:59:24.515723  188656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:24.515810  188656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:24.521882  188656 start.go:563] Will wait 60s for crictl version
	I0731 20:59:24.521966  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:24.527655  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:24.581055  188656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:24.581151  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.623207  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.662956  188656 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:59:22.780311  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.281324  188266 node_ready.go:49] node "default-k8s-diff-port-125614" has status "Ready":"True"
	I0731 20:59:23.281373  188266 node_ready.go:38] duration metric: took 7.005540469s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:23.281387  188266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:23.291207  188266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299173  188266 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.299202  188266 pod_ready.go:81] duration metric: took 7.971632ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299215  188266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307561  188266 pod_ready.go:92] pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.307580  188266 pod_ready.go:81] duration metric: took 8.357239ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307589  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314466  188266 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.314544  188266 pod_ready.go:81] duration metric: took 6.946044ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314565  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.323341  188266 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.108292  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Start
	I0731 20:59:23.108473  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring networks are active...
	I0731 20:59:23.109160  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network default is active
	I0731 20:59:23.109575  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network mk-embed-certs-831240 is active
	I0731 20:59:23.110032  187862 main.go:141] libmachine: (embed-certs-831240) Getting domain xml...
	I0731 20:59:23.110762  187862 main.go:141] libmachine: (embed-certs-831240) Creating domain...
	I0731 20:59:24.457926  187862 main.go:141] libmachine: (embed-certs-831240) Waiting to get IP...
	I0731 20:59:24.458936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.459381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.459477  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.459375  189758 retry.go:31] will retry after 266.695372ms: waiting for machine to come up
	I0731 20:59:24.727938  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.728394  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.728532  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.728451  189758 retry.go:31] will retry after 349.84093ms: waiting for machine to come up
	I0731 20:59:25.080044  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.080634  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.080668  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.080592  189758 retry.go:31] will retry after 324.555122ms: waiting for machine to come up
	I0731 20:59:25.407332  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.407852  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.407877  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.407795  189758 retry.go:31] will retry after 580.815897ms: waiting for machine to come up
	I0731 20:59:25.990957  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.991551  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.991578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.991468  189758 retry.go:31] will retry after 570.045476ms: waiting for machine to come up
	I0731 20:59:26.563493  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:26.563901  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:26.563931  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:26.563853  189758 retry.go:31] will retry after 582.597352ms: waiting for machine to come up
	I0731 20:59:27.148256  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:27.148744  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:27.148773  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:27.148688  189758 retry.go:31] will retry after 1.105713474s: waiting for machine to come up
	I0731 20:59:24.664851  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:24.668464  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.668842  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:24.668869  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.669103  188656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:24.674448  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:24.690857  188656 kubeadm.go:883] updating cluster {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:24.691011  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:59:24.691056  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:24.744259  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:24.744348  188656 ssh_runner.go:195] Run: which lz4
	I0731 20:59:24.749358  188656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:24.754299  188656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:24.754341  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:59:26.551495  188656 crio.go:462] duration metric: took 1.802206904s to copy over tarball
	I0731 20:59:26.551571  188656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:24.589677  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:26.591079  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:29.089923  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:25.824008  188266 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.824037  188266 pod_ready.go:81] duration metric: took 2.509461823s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.824052  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840569  188266 pod_ready.go:92] pod "kube-proxy-csdc4" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.840595  188266 pod_ready.go:81] duration metric: took 16.533543ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840613  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103726  188266 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:26.103759  188266 pod_ready.go:81] duration metric: took 263.1364ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103774  188266 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:28.112583  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:30.610462  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:28.255818  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:28.256478  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:28.256506  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:28.256408  189758 retry.go:31] will retry after 1.3552249s: waiting for machine to come up
	I0731 20:59:29.613070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:29.613661  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:29.613693  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:29.613620  189758 retry.go:31] will retry after 1.522319436s: waiting for machine to come up
	I0731 20:59:31.138020  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:31.138490  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:31.138522  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:31.138434  189758 retry.go:31] will retry after 1.573723862s: waiting for machine to come up
	I0731 20:59:29.653941  188656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102337952s)
	I0731 20:59:29.653974  188656 crio.go:469] duration metric: took 3.102444338s to extract the tarball
	I0731 20:59:29.653982  188656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:29.704065  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:29.745966  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:29.746010  188656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:59:29.746076  188656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.746107  188656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.746129  188656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.746149  188656 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.746170  188656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:59:29.746410  188656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.746423  188656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.746735  188656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.747998  188656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.748005  188656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.748021  188656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.748091  188656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.915865  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.918049  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.950840  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.952762  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.956317  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.959905  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:59:30.000707  188656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:59:30.000768  188656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.000821  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.007207  188656 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:59:30.007251  188656 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.007294  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.016613  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.082306  188656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:59:30.082358  188656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.082364  188656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:59:30.082414  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.082418  188656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.082557  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.089299  188656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:59:30.089382  188656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.089427  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.105150  188656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:59:30.105217  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.105246  188656 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:59:30.105264  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.105282  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.129702  188656 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:59:30.129748  188656 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.129779  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.129826  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.129853  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.129800  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.188192  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:59:30.188243  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:59:30.188342  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:59:30.188365  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.268231  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:59:30.268296  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:59:30.268337  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:59:30.287822  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:59:30.287929  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:59:30.635440  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:30.776879  188656 cache_images.go:92] duration metric: took 1.030849977s to LoadCachedImages
	W0731 20:59:30.777006  188656 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
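The warning above means the v1.20.0 control-plane images could not be loaded from the local image cache (at least the kube-scheduler_v1.20.0 tarball is missing on this host), so they will have to be pulled from registry.k8s.io when the cluster is bootstrapped. A quick way to see what is actually present in the cache, using the path from the warning:

    ls -l /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/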
	I0731 20:59:30.777028  188656 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0731 20:59:30.777175  188656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:30.777284  188656 ssh_runner.go:195] Run: crio config
	I0731 20:59:30.832542  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:59:30.832570  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:30.832586  188656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:30.832618  188656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239115 NodeName:old-k8s-version-239115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:59:30.832798  188656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239115"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
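For readability: the "0%!"(MISSING)" values in the evictionHard stanza above stand for literal "0%" thresholds; together with imageGCHighThresholdPercent: 100 they effectively disable disk-pressure eviction and image garbage collection inside the test VM, as the "disable disk resource management by default" comment indicates.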
	
	I0731 20:59:30.832877  188656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:59:30.842909  188656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:30.842995  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:30.852951  188656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 20:59:30.872643  188656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:30.889851  188656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0731 20:59:30.910958  188656 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:30.915645  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:30.928698  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:31.055628  188656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:31.076731  188656 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115 for IP: 192.168.61.51
	I0731 20:59:31.076759  188656 certs.go:194] generating shared ca certs ...
	I0731 20:59:31.076789  188656 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.076979  188656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:31.077041  188656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:31.077057  188656 certs.go:256] generating profile certs ...
	I0731 20:59:31.077175  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key
	I0731 20:59:31.077378  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83
	I0731 20:59:31.077514  188656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key
	I0731 20:59:31.077704  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:31.077789  188656 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:31.077806  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:31.077854  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:31.077892  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:31.077932  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:31.077997  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:31.078906  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:31.126980  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:31.167327  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:31.211947  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:31.258307  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 20:59:31.296628  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:31.342330  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:31.391114  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:31.415097  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:31.442595  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:31.472160  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:31.497814  188656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:31.515890  188656 ssh_runner.go:195] Run: openssl version
	I0731 20:59:31.523423  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:31.537984  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544161  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544225  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.552590  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:31.567190  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:31.581206  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586903  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586966  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.593485  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:31.606764  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:31.619748  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624599  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624681  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.631293  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
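The three `ln -fs` blocks above install each CA under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-name hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trust anchors at verification time. A minimal Go sketch of that step, shelling out to openssl the same way the logged commands do (paths taken from the log; error handling simplified, not minikube's actual code):

```go
// Sketch only: reproduce the hashed-symlink convention from the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates /etc/ssl/certs/<subject-hash>.0 -> pemPath, the lookup
// scheme OpenSSL uses for trust anchors (same effect as the logged `ln -fs`).
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/128891.pem",
		"/usr/share/ca-certificates/1288912.pem",
	} {
		if err := linkCert(p); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```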
	I0731 20:59:31.642823  188656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:31.647273  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:31.653142  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:31.659046  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:31.665552  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:31.671454  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:31.677426  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
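Each `-checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. A hedged equivalent using Go's crypto/x509 instead of shelling out (an illustration, not minikube's implementation):

```go
// Sketch: the Go equivalent of `openssl x509 -noout -in <crt> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Same certs the log checks; regeneration is only needed if one is close to expiry.
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(c, 24*time.Hour)
		fmt.Println(c, "expires within 24h:", soon, "err:", err)
	}
}
```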
	I0731 20:59:31.683490  188656 kubeadm.go:392] StartCluster: {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:31.683586  188656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:31.683625  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.725466  188656 cri.go:89] found id: ""
	I0731 20:59:31.725548  188656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:31.737025  188656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:31.737050  188656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:31.737113  188656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:31.747325  188656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:31.748325  188656 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:31.748965  188656 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239115" cluster setting kubeconfig missing "old-k8s-version-239115" context setting]
	I0731 20:59:31.749997  188656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.757569  188656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:31.771188  188656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0731 20:59:31.771222  188656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:31.771236  188656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:31.771292  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.811574  188656 cri.go:89] found id: ""
	I0731 20:59:31.811653  188656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:31.829930  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:31.840145  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:31.840165  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:31.840206  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:31.851266  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:31.851340  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:31.861634  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:31.871532  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:31.871605  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:31.882164  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.892222  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:31.892291  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.903299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:31.916163  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:31.916235  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
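The grep/rm pairs above apply one rule per kubeconfig: keep it only if it already references https://control-plane.minikube.internal:8443, otherwise remove it so the `kubeadm init phase kubeconfig all` run just below can regenerate it. A compact sketch of that rule (assumed logic, not minikube's source):

```go
// Sketch of the stale-kubeconfig cleanup shown in the log above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or pointing elsewhere: drop it, as the `sudo rm -f` lines do,
			// so `kubeadm init phase kubeconfig all` recreates it.
			os.Remove(conf)
			fmt.Println("removed stale config:", conf)
		}
	}
}
```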
	I0731 20:59:31.929423  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:31.942668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.107220  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.953249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.207806  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.307640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.410338  188656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:33.410444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
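The long run of identical `pgrep -xnf kube-apiserver.*minikube.*` lines that follows is this wait: the apiserver process is polled roughly every 500ms until a PID shows up or the start deadline passes. A rough Go sketch of the loop; the 4-minute timeout here is an assumption, not a value from the log:

```go
// Sketch of the apiserver wait loop (timeout value assumed).
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe as the log: newest full-command-line match for kube-apiserver.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServer(4 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
```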
	I0731 20:59:31.221009  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:33.589275  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.612024  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:35.109601  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.713632  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:32.714137  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:32.714169  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:32.714064  189758 retry.go:31] will retry after 2.013485748s: waiting for machine to come up
	I0731 20:59:34.729625  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:34.730006  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:34.730070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:34.729970  189758 retry.go:31] will retry after 2.193072749s: waiting for machine to come up
	I0731 20:59:36.924345  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:36.924990  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:36.925008  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:36.924940  189758 retry.go:31] will retry after 3.394781674s: waiting for machine to come up
	I0731 20:59:33.910958  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.411011  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.911110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.410715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.911117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.410825  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.911311  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.410757  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.910786  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:38.410821  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.089622  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:38.589435  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:37.110446  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:39.111323  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:40.322463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:40.322827  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:40.322857  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:40.322774  189758 retry.go:31] will retry after 3.836613891s: waiting for machine to come up
	I0731 20:59:38.910891  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.411547  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.911260  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.411404  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.910719  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.411449  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.910643  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.410967  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.910703  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:43.411187  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.088768  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:43.589256  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:41.609891  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.111379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.160516  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161009  187862 main.go:141] libmachine: (embed-certs-831240) Found IP for machine: 192.168.39.92
	I0731 20:59:44.161029  187862 main.go:141] libmachine: (embed-certs-831240) Reserving static IP address...
	I0731 20:59:44.161041  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has current primary IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161561  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.161594  187862 main.go:141] libmachine: (embed-certs-831240) DBG | skip adding static IP to network mk-embed-certs-831240 - found existing host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"}
	I0731 20:59:44.161609  187862 main.go:141] libmachine: (embed-certs-831240) Reserved static IP address: 192.168.39.92
	I0731 20:59:44.161623  187862 main.go:141] libmachine: (embed-certs-831240) Waiting for SSH to be available...
	I0731 20:59:44.161638  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Getting to WaitForSSH function...
	I0731 20:59:44.163936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164285  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.164318  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164447  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH client type: external
	I0731 20:59:44.164479  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa (-rw-------)
	I0731 20:59:44.164499  187862 main.go:141] libmachine: (embed-certs-831240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:44.164510  187862 main.go:141] libmachine: (embed-certs-831240) DBG | About to run SSH command:
	I0731 20:59:44.164544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | exit 0
	I0731 20:59:44.293463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:44.293819  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetConfigRaw
	I0731 20:59:44.294490  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.296982  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297351  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.297381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297634  187862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/config.json ...
	I0731 20:59:44.297877  187862 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:44.297897  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:44.298116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.300452  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300806  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.300829  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300953  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.301146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301308  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301439  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.301634  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.301811  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.301823  187862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:44.418065  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:44.418105  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418428  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:59:44.418446  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418666  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.421984  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422403  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.422434  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422568  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.422733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.422893  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.423023  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.423208  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.423371  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.423410  187862 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831240 && echo "embed-certs-831240" | sudo tee /etc/hostname
	I0731 20:59:44.549670  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831240
	
	I0731 20:59:44.549697  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.552503  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.552851  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.552876  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.553017  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.553200  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553398  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553533  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.553721  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.554012  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.554039  187862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831240/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:44.674662  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:44.674693  187862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:44.674713  187862 buildroot.go:174] setting up certificates
	I0731 20:59:44.674723  187862 provision.go:84] configureAuth start
	I0731 20:59:44.674733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.675011  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.677631  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.677911  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.677951  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.678081  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.679869  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680177  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.680205  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680332  187862 provision.go:143] copyHostCerts
	I0731 20:59:44.680391  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:44.680401  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:44.680450  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:44.680537  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:44.680545  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:44.680564  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:44.680628  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:44.680635  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:44.680652  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:44.680711  187862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831240 san=[127.0.0.1 192.168.39.92 embed-certs-831240 localhost minikube]
	I0731 20:59:44.733872  187862 provision.go:177] copyRemoteCerts
	I0731 20:59:44.733927  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:44.733951  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.736399  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736731  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.736758  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736935  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.737131  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.737273  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.737430  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:44.824050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:44.847699  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 20:59:44.872138  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:44.896013  187862 provision.go:87] duration metric: took 221.275458ms to configureAuth
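configureAuth above issues a docker-machine style server certificate whose SANs are the names listed at provision.go:117 (127.0.0.1, 192.168.39.92, embed-certs-831240, localhost, minikube), signs it with ca.pem/ca-key.pem, and copies it to /etc/docker on the guest. A self-contained crypto/x509 sketch of that issuance; the throwaway in-memory CA is an assumption so the example runs without minikube's key material:

```go
// Sketch: issue a server cert with the SANs from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem / ca-key.pem (assumption for the sketch).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0), // ~26280h, as in the profile config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the provision.go log line.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-831240"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-831240", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.92")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem equivalent
}
```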
	I0731 20:59:44.896042  187862 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:44.896234  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:44.896327  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.898820  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899206  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.899232  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899457  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.899660  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899822  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899993  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.900216  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.900438  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.900462  187862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:45.179165  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:45.179194  187862 machine.go:97] duration metric: took 881.302407ms to provisionDockerMachine
	I0731 20:59:45.179213  187862 start.go:293] postStartSetup for "embed-certs-831240" (driver="kvm2")
	I0731 20:59:45.179226  187862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:45.179252  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.179615  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:45.179646  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.182617  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183047  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.183069  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183284  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.183510  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.183654  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.183805  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.273492  187862 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:45.277593  187862 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:45.277618  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:45.277687  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:45.277782  187862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:45.277889  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:45.288172  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:45.311763  187862 start.go:296] duration metric: took 132.534326ms for postStartSetup
	I0731 20:59:45.311803  187862 fix.go:56] duration metric: took 22.228928797s for fixHost
	I0731 20:59:45.311827  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.314578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.314962  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.314998  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.315146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.315381  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315549  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315681  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.315868  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:45.316035  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:45.316045  187862 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:45.426289  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459585.381297707
	
	I0731 20:59:45.426314  187862 fix.go:216] guest clock: 1722459585.381297707
	I0731 20:59:45.426324  187862 fix.go:229] Guest: 2024-07-31 20:59:45.381297707 +0000 UTC Remote: 2024-07-31 20:59:45.311808006 +0000 UTC m=+363.090091892 (delta=69.489701ms)
	I0731 20:59:45.426379  187862 fix.go:200] guest clock delta is within tolerance: 69.489701ms
	I0731 20:59:45.426387  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 22.343543995s
	I0731 20:59:45.426419  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.426684  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:45.429330  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429757  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.429785  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429952  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430453  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430671  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430790  187862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:45.430854  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.430905  187862 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:45.430943  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.433850  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434108  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434192  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434222  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434385  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434580  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434584  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434611  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434760  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.434768  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434939  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434929  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.435099  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.435243  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.542122  187862 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:45.548583  187862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:45.690235  187862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:45.696897  187862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:45.696986  187862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:45.714456  187862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:45.714480  187862 start.go:495] detecting cgroup driver to use...
	I0731 20:59:45.714546  187862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:45.732184  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:45.747047  187862 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:45.747104  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:45.761152  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:45.775267  187862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:45.890891  187862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:46.043503  187862 docker.go:233] disabling docker service ...
	I0731 20:59:46.043577  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:46.058174  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:46.070900  187862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:46.209527  187862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:46.343868  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:46.357583  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:46.375819  187862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:46.375875  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.386762  187862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:46.386844  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.397495  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.407654  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.418326  187862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:46.428983  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.439530  187862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.457956  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.468003  187862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:46.477332  187862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:46.477400  187862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:46.490886  187862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:46.500516  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:46.617952  187862 ssh_runner.go:195] Run: sudo systemctl restart crio
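Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the cgroupfs cgroup manager, the "pod" conmon cgroup and the unprivileged-port sysctl before crio is restarted. An approximation of that end state follows; the exact layout of the drop-in is an assumption, and it is printed rather than written since the real file lives on the guest:

```go
// Approximate CRI-O drop-in produced by the logged sed edits (not a verbatim dump).
package main

import "fmt"

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	fmt.Print(crioDropIn)
}
```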
	I0731 20:59:46.761978  187862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:46.762088  187862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:46.767210  187862 start.go:563] Will wait 60s for crictl version
	I0731 20:59:46.767275  187862 ssh_runner.go:195] Run: which crictl
	I0731 20:59:46.771502  187862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:46.810894  187862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:46.810976  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.839234  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.871209  187862 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:46.872648  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:46.875374  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875683  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:46.875698  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875900  187862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:46.880402  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:46.894098  187862 kubeadm.go:883] updating cluster {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:46.894238  187862 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:46.894300  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:46.937003  187862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:46.937079  187862 ssh_runner.go:195] Run: which lz4
	I0731 20:59:46.941158  187862 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:46.945395  187862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:46.945425  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:43.910997  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.410783  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.911365  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.410690  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.911150  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.411384  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.910579  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.411171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.910578  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:48.411377  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.589690  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:47.591464  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:46.608955  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.611634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:50.615557  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.414703  187862 crio.go:462] duration metric: took 1.473569222s to copy over tarball
	I0731 20:59:48.414789  187862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:50.666750  187862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251926888s)
	I0731 20:59:50.666783  187862 crio.go:469] duration metric: took 2.252043688s to extract the tarball
	I0731 20:59:50.666793  187862 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:50.707188  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:50.749781  187862 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:50.749808  187862 cache_images.go:84] Images are preloaded, skipping loading
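The preload decision above hinges on whether the expected control-plane images are already visible to CRI-O. A hedged Go sketch of that check follows; the JSON field names are assumed from `crictl images --output json` output and this is not minikube's actual crio.go code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the subset of `crictl images --output json`
// needed to look up repo tags (field names assumed from crictl output).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether CRI-O already knows the given image tag,
// the check that decides whether the preload tarball must be copied.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
	fmt.Println("preloaded:", ok, err)
}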
	I0731 20:59:50.749817  187862 kubeadm.go:934] updating node { 192.168.39.92 8443 v1.30.3 crio true true} ...
	I0731 20:59:50.749923  187862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-831240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:50.749998  187862 ssh_runner.go:195] Run: crio config
	I0731 20:59:50.797191  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:50.797214  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:50.797227  187862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:50.797253  187862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831240 NodeName:embed-certs-831240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:50.797484  187862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:50.797556  187862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:50.808170  187862 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:50.808236  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:50.817847  187862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 20:59:50.834107  187862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:50.849722  187862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 20:59:50.866599  187862 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:50.870727  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:50.884490  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:51.043488  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:51.064792  187862 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240 for IP: 192.168.39.92
	I0731 20:59:51.064816  187862 certs.go:194] generating shared ca certs ...
	I0731 20:59:51.064836  187862 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:51.065142  187862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:51.065225  187862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:51.065254  187862 certs.go:256] generating profile certs ...
	I0731 20:59:51.065443  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/client.key
	I0731 20:59:51.065571  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key.4e545c52
	I0731 20:59:51.065639  187862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key
	I0731 20:59:51.065798  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:51.065846  187862 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:51.065857  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:51.065883  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:51.065909  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:51.065929  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:51.065971  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:51.066633  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:51.107287  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:51.138745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:51.176139  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:51.211344  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 20:59:51.241050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:59:51.269307  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:51.293184  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:59:51.316745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:51.343620  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:51.367293  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:51.391789  187862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:51.413821  187862 ssh_runner.go:195] Run: openssl version
	I0731 20:59:51.420455  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:51.431721  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436672  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436724  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.442604  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:51.453601  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:51.464109  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468598  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468648  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.474333  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:51.484758  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:51.495093  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499557  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499605  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.505244  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
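The openssl/ln sequence above installs each CA under /etc/ssl/certs using its subject hash as the link name (for example b5213941.0 for minikubeCA.pem). An illustrative Go sketch of that step, shelling out to openssl exactly as the log does; it is not minikube's actual certs.go helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert symlinks a CA certificate into /etc/ssl/certs under its
// openssl subject hash (the `openssl x509 -hash -noout` value), so that
// TLS clients on the node trust it.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}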
	I0731 20:59:51.515545  187862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:51.519923  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:51.525696  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:51.531430  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:51.537082  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:51.542713  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:51.548206  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
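The -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours. A small Go equivalent using crypto/x509, shown only as a sketch of the same check:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the first certificate in a PEM file
// expires inside the given window, the property that
// `openssl x509 -noout -checkend 86400` tests for a 24h window.
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}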
	I0731 20:59:51.553705  187862 kubeadm.go:392] StartCluster: {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:51.553793  187862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:51.553841  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.592396  187862 cri.go:89] found id: ""
	I0731 20:59:51.592472  187862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:51.602510  187862 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:51.602528  187862 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:51.602578  187862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:51.612384  187862 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:51.613530  187862 kubeconfig.go:125] found "embed-certs-831240" server: "https://192.168.39.92:8443"
	I0731 20:59:51.615991  187862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:51.625205  187862 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I0731 20:59:51.625239  187862 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:51.625253  187862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:51.625307  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.663278  187862 cri.go:89] found id: ""
	I0731 20:59:51.663370  187862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:51.678876  187862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:51.688071  187862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:51.688092  187862 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:51.688139  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:51.696441  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:51.696494  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:51.705310  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:51.713545  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:51.713599  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:51.723512  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.732304  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:51.732380  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.741301  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:51.749537  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:51.749583  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:51.758609  187862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:51.774450  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:51.888916  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:48.910784  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.411137  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.911453  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.411128  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.911431  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.410483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.910975  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.411519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.911079  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.410802  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.094603  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.589951  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:53.424691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:55.609675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.666705  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.899759  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.975806  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:53.050422  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:53.050493  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.551073  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.051427  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.551268  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.570361  187862 api_server.go:72] duration metric: took 1.519937245s to wait for apiserver process to appear ...
	I0731 20:59:54.570389  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:54.570414  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:53.911405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.410870  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.911330  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.411491  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.911380  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.411483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.910602  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.411228  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.910486  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:58.411198  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.260421  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.260455  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.260469  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.284265  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.284301  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.570976  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.575616  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:57.575644  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.071247  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.075871  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.075903  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.570906  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.581990  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.582038  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:59.070528  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:59.074787  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 20:59:59.081502  187862 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:59.081541  187862 api_server.go:131] duration metric: took 4.511132973s to wait for apiserver health ...
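The healthz wait above repeatedly probes https://192.168.39.92:8443/healthz, tolerating the initial 403 and 500 responses until the post-start hooks finish. A simplified Go sketch of such a poll loop; it only checks for 200 and skips TLS verification, since the apiserver serves a cluster-local certificate, and it is not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it answers 200 OK
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.92:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}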
	I0731 20:59:59.081552  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:59.081561  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:59.083504  187862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:55.089279  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:57.589380  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:59.084894  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:59.098139  187862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
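The 496-byte conflist written above configures the bridge CNI recommended for the kvm2 + crio combination. The JSON below is an illustrative bridge + portmap conflist, not minikube's verbatim template; only the 10.244.0.0/16 pod subnet is taken from the log:

package main

import (
	"fmt"
	"os"
)

// bridgeConflist is an assumed, minimal bridge CNI configuration of the
// kind installed at /etc/cni/net.d/1-k8s.conflist.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}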
	I0731 20:59:59.118458  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:59.128022  187862 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:59.128061  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:59.128071  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:59.128082  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:59.128100  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:59.128113  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:59.128121  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:59.128134  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:59.128145  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:59.128156  187862 system_pods.go:74] duration metric: took 9.673815ms to wait for pod list to return data ...
	I0731 20:59:59.128168  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:59.131825  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:59.131853  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:59.131865  187862 node_conditions.go:105] duration metric: took 3.691724ms to run NodePressure ...
	I0731 20:59:59.131897  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:59.494923  187862 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501848  187862 kubeadm.go:739] kubelet initialised
	I0731 20:59:59.501875  187862 kubeadm.go:740] duration metric: took 6.920816ms waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501885  187862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:59.510503  187862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.518204  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518234  187862 pod_ready.go:81] duration metric: took 7.702873ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.518247  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518263  187862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.523236  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523258  187862 pod_ready.go:81] duration metric: took 4.985299ms for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.523266  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.535237  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535256  187862 pod_ready.go:81] duration metric: took 11.97449ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.535270  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.541512  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541531  187862 pod_ready.go:81] duration metric: took 6.24797ms for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.541539  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541545  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.922722  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922757  187862 pod_ready.go:81] duration metric: took 381.203526ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.922771  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922779  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.322049  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322077  187862 pod_ready.go:81] duration metric: took 399.289505ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.322088  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322094  187862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.722961  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.722993  187862 pod_ready.go:81] duration metric: took 400.88956ms for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.723008  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.723017  187862 pod_ready.go:38] duration metric: took 1.221112347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
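Each pod_ready.go wait above resolves once the pod reports the Ready condition as True; while the node itself is NotReady the wait is skipped with the errors shown. A Go sketch of the underlying condition check using client-go, with the kubeconfig path taken from the log and the pod name used only as an example:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether a pod's Ready condition is True, the
// condition the log's pod_ready waits poll for.
func podIsReady(clientset *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-121704/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-system", "metrics-server-569cc877fc-slbkm")
	fmt.Println("ready:", ready, err)
}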
	I0731 21:00:00.723050  187862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:00:00.735642  187862 ops.go:34] apiserver oom_adj: -16
	I0731 21:00:00.735697  187862 kubeadm.go:597] duration metric: took 9.133136671s to restartPrimaryControlPlane
	I0731 21:00:00.735735  187862 kubeadm.go:394] duration metric: took 9.182030801s to StartCluster
	I0731 21:00:00.735764  187862 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.735860  187862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:00:00.737955  187862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.738247  187862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:00:00.738329  187862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:00:00.738418  187862 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831240"
	I0731 21:00:00.738432  187862 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831240"
	I0731 21:00:00.738463  187862 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-831240"
	W0731 21:00:00.738475  187862 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:00:00.738481  187862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831240"
	I0731 21:00:00.738513  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738547  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:00:00.738581  187862 addons.go:69] Setting metrics-server=true in profile "embed-certs-831240"
	I0731 21:00:00.738651  187862 addons.go:234] Setting addon metrics-server=true in "embed-certs-831240"
	W0731 21:00:00.738666  187862 addons.go:243] addon metrics-server should already be in state true
	I0731 21:00:00.738735  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738818  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738858  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.738897  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738960  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.739144  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.739190  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.740244  187862 out.go:177] * Verifying Kubernetes components...
	I0731 21:00:00.746003  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:00.755735  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0731 21:00:00.755773  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0731 21:00:00.756268  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756271  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756594  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0731 21:00:00.756820  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756847  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.756892  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756917  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757069  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.757228  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757254  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757458  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.757638  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.757668  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757745  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.757774  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.758005  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.758543  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.758586  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.761553  187862 addons.go:234] Setting addon default-storageclass=true in "embed-certs-831240"
	W0731 21:00:00.761587  187862 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:00:00.761618  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.762018  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.762070  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.775492  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0731 21:00:00.776091  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.776712  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.776743  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.776760  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I0731 21:00:00.777245  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.777402  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.777513  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.777920  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.777945  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.778185  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0731 21:00:00.778393  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.778603  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.778687  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.779223  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.779243  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.779665  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.779718  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.780231  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.780274  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.780612  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.781947  187862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:00:00.782994  187862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:58.110503  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.112109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.784194  187862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:00.784216  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:00:00.784240  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.784937  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:00:00.784958  187862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:00:00.784984  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.788544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.788947  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.788970  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789127  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.789389  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.789521  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.789548  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789571  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.789773  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.790126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.790324  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.790502  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.790663  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.799024  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0731 21:00:00.799718  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.800341  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.800360  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.800967  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.801258  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.803078  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.803555  187862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:00.803571  187862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:00:00.803591  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.809363  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.809461  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809492  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.809512  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809680  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.809858  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.810032  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.933963  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:00:00.953572  187862 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:01.036486  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:01.040636  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:00:01.040658  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:00:01.063384  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:01.068645  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:00:01.068675  187862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:00:01.090838  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:01.090861  187862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:00:01.113173  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:02.099966  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063427097s)
	I0731 21:00:02.100021  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100035  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100080  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036657274s)
	I0731 21:00:02.100129  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100338  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100441  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100452  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100461  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100580  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100605  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100615  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100623  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100698  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100709  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100723  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100866  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100875  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100882  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.107654  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.107688  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.107952  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.107968  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.108003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140031  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026799248s)
	I0731 21:00:02.140100  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140424  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140455  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140470  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140482  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140494  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140772  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140800  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140808  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140817  187862 addons.go:475] Verifying addon metrics-server=true in "embed-certs-831240"
	I0731 21:00:02.142583  187862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:00:02.143787  187862 addons.go:510] duration metric: took 1.405477731s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 20:59:58.910774  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.410697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.911233  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.411170  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.911416  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.410979  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.911444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.411537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.911216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:03.411386  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.588315  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.610109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:04.610324  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.958162  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:05.458997  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:03.910942  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.411505  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.911485  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.410763  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.910937  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.411216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.910743  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.410941  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.910922  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:08.410593  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.589597  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.089475  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.090023  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:06.610390  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.110758  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.958154  187862 node_ready.go:49] node "embed-certs-831240" has status "Ready":"True"
	I0731 21:00:07.958180  187862 node_ready.go:38] duration metric: took 7.004576791s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:07.958191  187862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:00:07.969639  187862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974704  187862 pod_ready.go:92] pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:07.974733  187862 pod_ready.go:81] duration metric: took 5.064645ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974745  187862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:09.980566  187862 pod_ready.go:102] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:10.480476  187862 pod_ready.go:92] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.480501  187862 pod_ready.go:81] duration metric: took 2.505748029s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.480511  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485850  187862 pod_ready.go:92] pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.485873  187862 pod_ready.go:81] duration metric: took 5.353478ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485883  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:08.910788  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.410807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.911286  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.411372  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.910748  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.411253  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.411208  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.910887  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:13.411318  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.589454  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.090483  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:11.610842  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.110306  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:12.492346  187862 pod_ready.go:102] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.991859  187862 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.991884  187862 pod_ready.go:81] duration metric: took 3.505993775s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.991893  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997932  187862 pod_ready.go:92] pod "kube-proxy-x662j" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.997961  187862 pod_ready.go:81] duration metric: took 6.060225ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997974  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007155  187862 pod_ready.go:92] pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:14.007178  187862 pod_ready.go:81] duration metric: took 9.197289ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007187  187862 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:16.013417  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.910943  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.410728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.911343  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.410545  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.910560  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.411117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.910537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.410761  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.910796  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:18.411138  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.589010  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.589215  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:16.609886  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.610209  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.611613  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.013504  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.513116  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.911394  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.411098  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.910629  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.410698  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.910760  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.410503  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.910582  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.410724  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.910792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:23.410961  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.089938  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.588082  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.109996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:25.110361  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:22.514254  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:24.514729  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.013263  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.910510  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.410725  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.411543  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.911473  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.410494  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.910519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.410950  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.911528  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:28.411350  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.589873  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.590134  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.612311  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:30.110116  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:29.014386  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:31.014534  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:28.911371  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.411269  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.911465  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.410633  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.911166  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.411184  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.910806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.410806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.911125  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:33.410942  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:33.411021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:33.461204  188656 cri.go:89] found id: ""
	I0731 21:00:33.461232  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.461241  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:33.461249  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:33.461313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:33.500898  188656 cri.go:89] found id: ""
	I0731 21:00:33.500927  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.500937  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:33.500944  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:33.501010  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:33.536865  188656 cri.go:89] found id: ""
	I0731 21:00:33.536889  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.536902  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:33.536908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:33.536957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:33.578540  188656 cri.go:89] found id: ""
	I0731 21:00:33.578570  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.578582  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:33.578595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:33.578686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:33.616242  188656 cri.go:89] found id: ""
	I0731 21:00:33.616266  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.616276  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:33.616283  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:33.616345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:33.650436  188656 cri.go:89] found id: ""
	I0731 21:00:33.650468  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.650479  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:33.650487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:33.650552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:33.687256  188656 cri.go:89] found id: ""
	I0731 21:00:33.687288  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.687300  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:33.687308  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:33.687365  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:33.720381  188656 cri.go:89] found id: ""
	I0731 21:00:33.720428  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.720440  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:33.720453  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:33.720469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:33.772182  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:33.772226  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:33.787323  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:33.787359  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:00:30.089778  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.587877  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.110769  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:34.610418  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:33.514142  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.013676  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:00:33.907858  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:33.907878  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:33.907892  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:33.974118  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:33.974157  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:36.513427  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:36.527531  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:36.527588  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:36.567679  188656 cri.go:89] found id: ""
	I0731 21:00:36.567706  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.567714  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:36.567726  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:36.567786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:36.608106  188656 cri.go:89] found id: ""
	I0731 21:00:36.608134  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.608145  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:36.608153  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:36.608215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:36.651783  188656 cri.go:89] found id: ""
	I0731 21:00:36.651815  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.651824  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:36.651830  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:36.651892  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:36.686716  188656 cri.go:89] found id: ""
	I0731 21:00:36.686743  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.686751  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:36.686758  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:36.686823  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:36.721823  188656 cri.go:89] found id: ""
	I0731 21:00:36.721857  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.721865  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:36.721871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:36.721939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:36.758060  188656 cri.go:89] found id: ""
	I0731 21:00:36.758093  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.758103  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:36.758112  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:36.758173  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:36.801667  188656 cri.go:89] found id: ""
	I0731 21:00:36.801694  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.801704  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:36.801712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:36.801776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:36.845084  188656 cri.go:89] found id: ""
	I0731 21:00:36.845113  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.845124  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:36.845137  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:36.845152  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:36.897208  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:36.897248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:36.910716  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:36.910750  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:36.987259  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:36.987285  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:36.987304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:37.061109  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:37.061144  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:34.589416  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.592841  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.088346  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.611386  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.111149  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:38.516701  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.017409  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.600847  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:39.615897  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:39.615957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:39.655390  188656 cri.go:89] found id: ""
	I0731 21:00:39.655417  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.655424  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:39.655430  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:39.655502  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:39.694180  188656 cri.go:89] found id: ""
	I0731 21:00:39.694213  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.694224  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:39.694231  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:39.694300  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:39.736752  188656 cri.go:89] found id: ""
	I0731 21:00:39.736783  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.736793  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:39.736801  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:39.736860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:39.775685  188656 cri.go:89] found id: ""
	I0731 21:00:39.775770  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.775790  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:39.775802  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:39.775871  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:39.816790  188656 cri.go:89] found id: ""
	I0731 21:00:39.816820  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.816829  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:39.816835  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:39.816886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:39.854931  188656 cri.go:89] found id: ""
	I0731 21:00:39.854963  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.854973  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:39.854981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:39.855045  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:39.891039  188656 cri.go:89] found id: ""
	I0731 21:00:39.891066  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.891074  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:39.891083  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:39.891136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:39.927434  188656 cri.go:89] found id: ""
	I0731 21:00:39.927463  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.927473  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:39.927483  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:39.927496  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:39.941240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:39.941272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:40.017212  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:40.017233  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:40.017246  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:40.094047  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:40.094081  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:40.138940  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:40.138966  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:42.690818  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:42.704855  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:42.704931  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:42.752315  188656 cri.go:89] found id: ""
	I0731 21:00:42.752347  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.752368  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:42.752376  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:42.752445  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:42.790060  188656 cri.go:89] found id: ""
	I0731 21:00:42.790090  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.790101  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:42.790109  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:42.790220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:42.825504  188656 cri.go:89] found id: ""
	I0731 21:00:42.825532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.825540  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:42.825547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:42.825598  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:42.860157  188656 cri.go:89] found id: ""
	I0731 21:00:42.860193  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.860204  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:42.860213  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:42.860286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:42.902914  188656 cri.go:89] found id: ""
	I0731 21:00:42.902947  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.902959  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:42.902967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:42.903036  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:42.950503  188656 cri.go:89] found id: ""
	I0731 21:00:42.950532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.950541  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:42.950550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:42.950603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:43.010232  188656 cri.go:89] found id: ""
	I0731 21:00:43.010261  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.010272  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:43.010280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:43.010344  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:43.045487  188656 cri.go:89] found id: ""
	I0731 21:00:43.045517  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.045527  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:43.045539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:43.045556  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:43.123248  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:43.123279  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:43.123296  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:43.212230  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:43.212272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:43.254595  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:43.254626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:43.306187  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:43.306227  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:41.589806  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.088126  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.611786  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.109436  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:43.513500  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.514161  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.820246  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:45.835707  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:45.835786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:45.872079  188656 cri.go:89] found id: ""
	I0731 21:00:45.872110  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.872122  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:45.872130  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:45.872196  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:45.910637  188656 cri.go:89] found id: ""
	I0731 21:00:45.910664  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.910672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:45.910678  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:45.910740  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:45.945316  188656 cri.go:89] found id: ""
	I0731 21:00:45.945360  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.945372  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:45.945380  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:45.945455  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:45.982015  188656 cri.go:89] found id: ""
	I0731 21:00:45.982046  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.982057  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:45.982096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:45.982165  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:46.017359  188656 cri.go:89] found id: ""
	I0731 21:00:46.017392  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.017404  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:46.017412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:46.017478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:46.054401  188656 cri.go:89] found id: ""
	I0731 21:00:46.054431  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.054447  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:46.054454  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:46.054507  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:46.092107  188656 cri.go:89] found id: ""
	I0731 21:00:46.092130  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.092137  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:46.092143  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:46.092190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:46.128613  188656 cri.go:89] found id: ""
	I0731 21:00:46.128642  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.128652  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:46.128665  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:46.128679  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:46.144539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:46.144570  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:46.219399  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:46.219433  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:46.219448  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:46.304486  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:46.304529  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:46.344087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:46.344121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
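(Editor's note on the cycle above: minikube's waiter probes for each control-plane container with crictl, finds none, then collects kubelet/dmesg/describe-nodes/CRI-O/container-status output before retrying. A minimal sketch of that probe sequence, run by hand on the node; it only reuses commands visible in the log and assumes crictl is on the PATH, it is not an official minikube script.)

	# sketch: reproduce the per-component probe the log records above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container found matching $c"
	done
	# fallback container listing, taken verbatim from the log
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a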
	I0731 21:00:46.090543  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.090607  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:46.111072  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.610316  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.611553  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.014287  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.513252  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.894728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:48.916610  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:48.916675  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:48.978515  188656 cri.go:89] found id: ""
	I0731 21:00:48.978543  188656 logs.go:276] 0 containers: []
	W0731 21:00:48.978550  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:48.978557  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:48.978615  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:49.026224  188656 cri.go:89] found id: ""
	I0731 21:00:49.026257  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.026268  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:49.026276  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:49.026354  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:49.064967  188656 cri.go:89] found id: ""
	I0731 21:00:49.064994  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.065003  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:49.065010  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:49.065070  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:49.101966  188656 cri.go:89] found id: ""
	I0731 21:00:49.101990  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.101999  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:49.102004  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:49.102056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:49.137775  188656 cri.go:89] found id: ""
	I0731 21:00:49.137801  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.137809  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:49.137815  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:49.137867  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:49.173778  188656 cri.go:89] found id: ""
	I0731 21:00:49.173824  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.173832  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:49.173839  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:49.173908  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:49.207211  188656 cri.go:89] found id: ""
	I0731 21:00:49.207239  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.207247  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:49.207254  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:49.207333  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:49.244126  188656 cri.go:89] found id: ""
	I0731 21:00:49.244159  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.244180  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:49.244202  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:49.244221  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:49.299606  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:49.299646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:49.314093  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:49.314121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:49.384691  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:49.384712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:49.384728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:49.464425  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:49.464462  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.005670  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:52.019617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:52.019705  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:52.053452  188656 cri.go:89] found id: ""
	I0731 21:00:52.053485  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.053494  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:52.053500  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:52.053552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:52.094462  188656 cri.go:89] found id: ""
	I0731 21:00:52.094495  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.094504  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:52.094510  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:52.094572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:52.134555  188656 cri.go:89] found id: ""
	I0731 21:00:52.134584  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.134595  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:52.134602  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:52.134676  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:52.168805  188656 cri.go:89] found id: ""
	I0731 21:00:52.168851  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.168863  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:52.168871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:52.168939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:52.203093  188656 cri.go:89] found id: ""
	I0731 21:00:52.203121  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.203132  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:52.203140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:52.203213  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:52.237816  188656 cri.go:89] found id: ""
	I0731 21:00:52.237842  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.237850  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:52.237857  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:52.237906  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:52.272136  188656 cri.go:89] found id: ""
	I0731 21:00:52.272175  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.272194  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:52.272202  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:52.272261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:52.306616  188656 cri.go:89] found id: ""
	I0731 21:00:52.306641  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.306649  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:52.306659  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:52.306671  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:52.372668  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:52.372690  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:52.372707  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:52.457752  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:52.457794  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.496087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:52.496129  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:52.548137  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:52.548176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:50.588204  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.089737  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.110034  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.110293  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:52.514848  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.013623  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.015221  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.063463  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:55.076922  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:55.077005  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:55.117479  188656 cri.go:89] found id: ""
	I0731 21:00:55.117511  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.117523  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:55.117531  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:55.117595  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:55.156311  188656 cri.go:89] found id: ""
	I0731 21:00:55.156339  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.156348  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:55.156354  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:55.156421  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:55.196778  188656 cri.go:89] found id: ""
	I0731 21:00:55.196807  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.196818  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:55.196826  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:55.196898  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:55.237575  188656 cri.go:89] found id: ""
	I0731 21:00:55.237605  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.237614  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:55.237620  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:55.237672  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:55.271717  188656 cri.go:89] found id: ""
	I0731 21:00:55.271746  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.271754  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:55.271760  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:55.271811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:55.307586  188656 cri.go:89] found id: ""
	I0731 21:00:55.307618  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.307630  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:55.307637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:55.307708  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:55.343325  188656 cri.go:89] found id: ""
	I0731 21:00:55.343352  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.343361  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:55.343367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:55.343418  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:55.378959  188656 cri.go:89] found id: ""
	I0731 21:00:55.378988  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.378997  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:55.379008  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:55.379021  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:55.454213  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:55.454243  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:55.454260  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:55.532802  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:55.532839  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.575903  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:55.575940  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:55.635105  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:55.635140  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.149801  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:58.162682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:58.162743  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:58.196220  188656 cri.go:89] found id: ""
	I0731 21:00:58.196245  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.196254  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:58.196260  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:58.196313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:58.231052  188656 cri.go:89] found id: ""
	I0731 21:00:58.231083  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.231093  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:58.231099  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:58.231156  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:58.265569  188656 cri.go:89] found id: ""
	I0731 21:00:58.265599  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.265612  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:58.265633  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:58.265695  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:58.300750  188656 cri.go:89] found id: ""
	I0731 21:00:58.300779  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.300788  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:58.300793  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:58.300869  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:58.333920  188656 cri.go:89] found id: ""
	I0731 21:00:58.333949  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.333958  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:58.333963  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:58.334015  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:58.368732  188656 cri.go:89] found id: ""
	I0731 21:00:58.368759  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.368771  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:58.368787  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:58.368855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:58.408454  188656 cri.go:89] found id: ""
	I0731 21:00:58.408488  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.408501  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:58.408510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:58.408575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:58.445855  188656 cri.go:89] found id: ""
	I0731 21:00:58.445888  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.445900  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:58.445913  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:58.445934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:58.496144  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:58.496177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.510708  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:58.510743  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:58.580690  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:58.580712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:58.580725  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:58.657281  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:58.657320  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.591068  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:58.088264  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.610282  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.611376  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.017831  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.514115  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
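(Editor's note: the interleaved pod_ready lines from the other test processes poll the metrics-server pod's Ready condition, which stays "False" throughout this window. A hedged one-liner to inspect the same condition manually; the pod name is copied from the log and the jsonpath filter is standard kubectl syntax, not part of the original output.)

	# sketch: read the Ready condition the pod_ready loop is polling
	kubectl -n kube-system get pod metrics-server-569cc877fc-slbkm \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'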
	I0731 21:01:01.196374  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:01.209044  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:01.209111  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:01.247313  188656 cri.go:89] found id: ""
	I0731 21:01:01.247343  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.247353  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:01.247360  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:01.247443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:01.282269  188656 cri.go:89] found id: ""
	I0731 21:01:01.282300  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.282308  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:01.282314  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:01.282370  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:01.315598  188656 cri.go:89] found id: ""
	I0731 21:01:01.315628  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.315638  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:01.315644  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:01.315697  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:01.352492  188656 cri.go:89] found id: ""
	I0731 21:01:01.352521  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.352533  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:01.352540  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:01.352605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:01.387858  188656 cri.go:89] found id: ""
	I0731 21:01:01.387885  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.387894  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:01.387900  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:01.387950  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:01.425014  188656 cri.go:89] found id: ""
	I0731 21:01:01.425042  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.425052  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:01.425061  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:01.425129  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:01.463068  188656 cri.go:89] found id: ""
	I0731 21:01:01.463098  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.463107  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:01.463113  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:01.463171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:01.500174  188656 cri.go:89] found id: ""
	I0731 21:01:01.500203  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.500214  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:01.500229  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:01.500244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:01.554350  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:01.554389  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:01.569353  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:01.569394  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:01.641074  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:01.641095  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:01.641108  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:01.722340  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:01.722377  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:00.088915  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.089981  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.109888  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.109951  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.015302  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.513535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.264035  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:04.278374  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:04.278441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:04.314037  188656 cri.go:89] found id: ""
	I0731 21:01:04.314068  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.314079  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:04.314087  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:04.314159  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:04.347604  188656 cri.go:89] found id: ""
	I0731 21:01:04.347635  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.347646  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:04.347653  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:04.347718  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:04.382412  188656 cri.go:89] found id: ""
	I0731 21:01:04.382442  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.382454  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:04.382462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:04.382516  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:04.419097  188656 cri.go:89] found id: ""
	I0731 21:01:04.419130  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.419142  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:04.419150  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:04.419209  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:04.464561  188656 cri.go:89] found id: ""
	I0731 21:01:04.464592  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.464601  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:04.464607  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:04.464683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:04.500484  188656 cri.go:89] found id: ""
	I0731 21:01:04.500510  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.500518  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:04.500524  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:04.500577  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:04.536211  188656 cri.go:89] found id: ""
	I0731 21:01:04.536239  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.536250  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:04.536257  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:04.536324  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:04.569521  188656 cri.go:89] found id: ""
	I0731 21:01:04.569548  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.569556  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:04.569567  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:04.569583  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:04.621228  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:04.621261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:04.637500  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:04.637527  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:04.710577  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:04.710606  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:04.710623  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.788305  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:04.788343  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.329209  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:07.343021  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:07.343089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:07.378556  188656 cri.go:89] found id: ""
	I0731 21:01:07.378588  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.378603  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:07.378610  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:07.378679  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:07.416419  188656 cri.go:89] found id: ""
	I0731 21:01:07.416455  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.416467  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:07.416474  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:07.416538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:07.454720  188656 cri.go:89] found id: ""
	I0731 21:01:07.454749  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.454758  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:07.454764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:07.454815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:07.488963  188656 cri.go:89] found id: ""
	I0731 21:01:07.488995  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.489004  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:07.489009  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:07.489060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:07.531916  188656 cri.go:89] found id: ""
	I0731 21:01:07.531949  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.531961  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:07.531967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:07.532019  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:07.569233  188656 cri.go:89] found id: ""
	I0731 21:01:07.569266  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.569275  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:07.569281  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:07.569350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:07.606318  188656 cri.go:89] found id: ""
	I0731 21:01:07.606349  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.606360  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:07.606368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:07.606442  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:07.641408  188656 cri.go:89] found id: ""
	I0731 21:01:07.641436  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.641445  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:07.641454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:07.641466  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.681094  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:07.681123  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:07.734600  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:07.734641  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:07.748747  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:07.748779  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:07.821775  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:07.821799  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:07.821816  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.590174  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:07.089655  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.110694  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:08.610381  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.611128  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:09.013688  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:11.513361  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.399973  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:10.412908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:10.412986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:10.448866  188656 cri.go:89] found id: ""
	I0731 21:01:10.448895  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.448903  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:10.448909  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:10.448966  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:10.486309  188656 cri.go:89] found id: ""
	I0731 21:01:10.486338  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.486346  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:10.486352  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:10.486411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:10.522834  188656 cri.go:89] found id: ""
	I0731 21:01:10.522856  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.522863  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:10.522870  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:10.522929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:10.558272  188656 cri.go:89] found id: ""
	I0731 21:01:10.558304  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.558324  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:10.558330  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:10.558391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:10.596560  188656 cri.go:89] found id: ""
	I0731 21:01:10.596589  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.596600  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:10.596608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:10.596668  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:10.633488  188656 cri.go:89] found id: ""
	I0731 21:01:10.633518  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.633529  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:10.633537  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:10.633597  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:10.665779  188656 cri.go:89] found id: ""
	I0731 21:01:10.665812  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.665824  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:10.665832  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:10.665895  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:10.700526  188656 cri.go:89] found id: ""
	I0731 21:01:10.700556  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.700564  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:10.700575  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:10.700587  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:10.753507  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:10.753550  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:10.768056  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:10.768089  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:10.842120  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:10.842142  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:10.842159  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:10.916532  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:10.916565  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:13.456826  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:13.471064  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:13.471130  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:13.505660  188656 cri.go:89] found id: ""
	I0731 21:01:13.505694  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.505707  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:13.505713  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:13.505775  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:13.543084  188656 cri.go:89] found id: ""
	I0731 21:01:13.543109  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.543117  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:13.543123  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:13.543182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:13.578940  188656 cri.go:89] found id: ""
	I0731 21:01:13.578966  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.578974  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:13.578981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:13.579047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:13.617710  188656 cri.go:89] found id: ""
	I0731 21:01:13.617733  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.617740  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:13.617747  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:13.617810  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:13.653535  188656 cri.go:89] found id: ""
	I0731 21:01:13.653567  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.653579  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:13.653587  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:13.653658  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:13.687914  188656 cri.go:89] found id: ""
	I0731 21:01:13.687942  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.687953  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:13.687960  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:13.688031  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:13.725242  188656 cri.go:89] found id: ""
	I0731 21:01:13.725278  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.725287  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:13.725293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:13.725372  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:13.760890  188656 cri.go:89] found id: ""
	I0731 21:01:13.760918  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.760929  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:13.760943  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:13.760958  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:13.810212  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:13.810252  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:13.824229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:13.824259  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:09.588945  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:12.088514  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:14.088684  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.109760  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:15.109938  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.515603  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:16.013268  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:13.895306  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:13.895331  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:13.895344  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:13.976366  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:13.976411  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.520165  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:16.533970  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:16.534035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:16.571444  188656 cri.go:89] found id: ""
	I0731 21:01:16.571474  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.571482  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:16.571488  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:16.571539  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:16.608150  188656 cri.go:89] found id: ""
	I0731 21:01:16.608176  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.608186  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:16.608194  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:16.608254  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:16.643252  188656 cri.go:89] found id: ""
	I0731 21:01:16.643283  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.643294  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:16.643302  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:16.643363  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:16.679521  188656 cri.go:89] found id: ""
	I0731 21:01:16.679552  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.679563  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:16.679571  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:16.679624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:16.713502  188656 cri.go:89] found id: ""
	I0731 21:01:16.713532  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.713541  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:16.713547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:16.713624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:16.748276  188656 cri.go:89] found id: ""
	I0731 21:01:16.748309  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.748318  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:16.748324  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:16.748383  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:16.783895  188656 cri.go:89] found id: ""
	I0731 21:01:16.783929  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.783940  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:16.783948  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:16.784014  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:16.817362  188656 cri.go:89] found id: ""
	I0731 21:01:16.817392  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.817415  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:16.817425  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:16.817440  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:16.872584  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:16.872637  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:16.887240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:16.887275  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:16.961920  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:16.961949  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:16.961967  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:17.041889  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:17.041924  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.089420  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.089611  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:17.110442  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.111424  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.013772  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:20.514737  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.585935  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:19.600389  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:19.600475  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:19.635883  188656 cri.go:89] found id: ""
	I0731 21:01:19.635913  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.635924  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:19.635932  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:19.635995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:19.674413  188656 cri.go:89] found id: ""
	I0731 21:01:19.674441  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.674459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:19.674471  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:19.674538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:19.708181  188656 cri.go:89] found id: ""
	I0731 21:01:19.708211  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.708219  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:19.708224  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:19.708292  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:19.744737  188656 cri.go:89] found id: ""
	I0731 21:01:19.744774  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.744783  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:19.744791  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:19.744849  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:19.784366  188656 cri.go:89] found id: ""
	I0731 21:01:19.784398  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.784406  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:19.784412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:19.784465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:19.819234  188656 cri.go:89] found id: ""
	I0731 21:01:19.819269  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.819280  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:19.819289  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:19.819355  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:19.851462  188656 cri.go:89] found id: ""
	I0731 21:01:19.851494  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.851503  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:19.851510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:19.851563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:19.896575  188656 cri.go:89] found id: ""
	I0731 21:01:19.896604  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.896612  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:19.896624  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:19.896640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:19.952239  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:19.952284  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:19.969411  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:19.969442  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:20.042820  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:20.042847  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:20.042863  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:20.130070  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:20.130115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:22.674956  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:22.688548  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:22.688616  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:22.728750  188656 cri.go:89] found id: ""
	I0731 21:01:22.728775  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.728784  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:22.728790  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:22.728844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:22.763765  188656 cri.go:89] found id: ""
	I0731 21:01:22.763793  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.763801  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:22.763807  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:22.763858  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:22.799134  188656 cri.go:89] found id: ""
	I0731 21:01:22.799163  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.799172  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:22.799178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:22.799237  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:22.833972  188656 cri.go:89] found id: ""
	I0731 21:01:22.833998  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.834005  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:22.834011  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:22.834060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:22.869686  188656 cri.go:89] found id: ""
	I0731 21:01:22.869711  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.869719  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:22.869724  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:22.869776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:22.907919  188656 cri.go:89] found id: ""
	I0731 21:01:22.907950  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.907961  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:22.907969  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:22.908035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:22.947162  188656 cri.go:89] found id: ""
	I0731 21:01:22.947192  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.947204  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:22.947212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:22.947273  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:22.992822  188656 cri.go:89] found id: ""
	I0731 21:01:22.992860  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.992872  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:22.992884  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:22.992900  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:23.045552  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:23.045589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:23.059895  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:23.059925  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:23.135535  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:23.135561  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:23.135577  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:23.217468  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:23.217521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:20.588507  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.588759  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:21.611467  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:24.110813  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.514805  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.012583  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.013095  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.771615  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:25.785037  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:25.785115  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:25.821070  188656 cri.go:89] found id: ""
	I0731 21:01:25.821100  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.821112  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:25.821120  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:25.821176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:25.856174  188656 cri.go:89] found id: ""
	I0731 21:01:25.856206  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.856217  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:25.856225  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:25.856288  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:25.889440  188656 cri.go:89] found id: ""
	I0731 21:01:25.889473  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.889483  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:25.889490  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:25.889546  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:25.924770  188656 cri.go:89] found id: ""
	I0731 21:01:25.924796  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.924804  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:25.924811  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:25.924860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:25.963529  188656 cri.go:89] found id: ""
	I0731 21:01:25.963576  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.963588  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:25.963595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:25.963670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:26.000033  188656 cri.go:89] found id: ""
	I0731 21:01:26.000060  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.000069  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:26.000076  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:26.000133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:26.035310  188656 cri.go:89] found id: ""
	I0731 21:01:26.035341  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.035353  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:26.035359  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:26.035423  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:26.070096  188656 cri.go:89] found id: ""
	I0731 21:01:26.070119  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.070127  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:26.070138  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:26.070149  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:26.141198  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:26.141220  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:26.141237  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:26.219766  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:26.219805  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:26.264836  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:26.264864  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:26.316672  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:26.316709  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:28.832882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:24.588907  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.088961  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.089538  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:26.111336  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.609453  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:30.610379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.014929  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:31.512827  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.846243  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:28.846307  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:28.880312  188656 cri.go:89] found id: ""
	I0731 21:01:28.880339  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.880350  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:28.880358  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:28.880419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:28.914625  188656 cri.go:89] found id: ""
	I0731 21:01:28.914652  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.914660  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:28.914667  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:28.914726  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:28.949138  188656 cri.go:89] found id: ""
	I0731 21:01:28.949173  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.949185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:28.949192  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:28.949264  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:28.985229  188656 cri.go:89] found id: ""
	I0731 21:01:28.985258  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.985266  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:28.985272  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:28.985326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:29.021520  188656 cri.go:89] found id: ""
	I0731 21:01:29.021550  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.021562  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:29.021568  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:29.021629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:29.058639  188656 cri.go:89] found id: ""
	I0731 21:01:29.058671  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.058682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:29.058690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:29.058755  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:29.105435  188656 cri.go:89] found id: ""
	I0731 21:01:29.105458  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.105466  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:29.105472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:29.105528  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:29.147118  188656 cri.go:89] found id: ""
	I0731 21:01:29.147144  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.147152  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:29.147161  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:29.147177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:29.231698  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:29.231735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:29.276163  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:29.276200  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:29.330551  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:29.330589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:29.350293  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:29.350323  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:29.456073  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:31.956964  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:31.970712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:31.970780  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:32.009546  188656 cri.go:89] found id: ""
	I0731 21:01:32.009574  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.009585  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:32.009593  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:32.009674  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:32.046622  188656 cri.go:89] found id: ""
	I0731 21:01:32.046661  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.046672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:32.046680  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:32.046748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:32.080958  188656 cri.go:89] found id: ""
	I0731 21:01:32.080985  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.080993  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:32.080998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:32.081052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:32.117454  188656 cri.go:89] found id: ""
	I0731 21:01:32.117480  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.117489  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:32.117495  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:32.117561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:32.152335  188656 cri.go:89] found id: ""
	I0731 21:01:32.152369  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.152380  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:32.152387  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:32.152441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:32.186631  188656 cri.go:89] found id: ""
	I0731 21:01:32.186670  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.186682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:32.186691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:32.186761  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:32.221496  188656 cri.go:89] found id: ""
	I0731 21:01:32.221533  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.221544  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:32.221551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:32.221632  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:32.256315  188656 cri.go:89] found id: ""
	I0731 21:01:32.256341  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.256350  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:32.256360  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:32.256372  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:32.295759  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:32.295788  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:32.347855  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:32.347888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:32.360982  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:32.361012  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:32.433900  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:32.433926  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:32.433947  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:31.588474  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.590513  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:32.610672  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.110698  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.514600  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:36.013157  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.013369  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:35.027203  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:35.027298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:35.065567  188656 cri.go:89] found id: ""
	I0731 21:01:35.065599  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.065610  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:35.065617  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:35.065686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:35.104285  188656 cri.go:89] found id: ""
	I0731 21:01:35.104317  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.104328  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:35.104335  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:35.104430  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:35.151081  188656 cri.go:89] found id: ""
	I0731 21:01:35.151108  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.151119  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:35.151127  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:35.151190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:35.196844  188656 cri.go:89] found id: ""
	I0731 21:01:35.196875  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.196886  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:35.196894  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:35.196964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:35.253581  188656 cri.go:89] found id: ""
	I0731 21:01:35.253612  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.253623  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:35.253630  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:35.253703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:35.295791  188656 cri.go:89] found id: ""
	I0731 21:01:35.295819  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.295830  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:35.295838  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:35.295904  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:35.329405  188656 cri.go:89] found id: ""
	I0731 21:01:35.329441  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.329454  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:35.329462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:35.329526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:35.363976  188656 cri.go:89] found id: ""
	I0731 21:01:35.364009  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.364022  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:35.364035  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:35.364051  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:35.421213  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:35.421253  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:35.436612  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:35.436646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:35.514154  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:35.514182  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:35.514197  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:35.588048  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:35.588082  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:38.133466  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:38.147071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:38.147142  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:38.179992  188656 cri.go:89] found id: ""
	I0731 21:01:38.180024  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.180036  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:38.180044  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:38.180116  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:38.213784  188656 cri.go:89] found id: ""
	I0731 21:01:38.213816  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.213827  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:38.213834  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:38.213901  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:38.254190  188656 cri.go:89] found id: ""
	I0731 21:01:38.254220  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.254229  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:38.254235  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:38.254284  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:38.289695  188656 cri.go:89] found id: ""
	I0731 21:01:38.289732  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.289743  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:38.289751  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:38.289819  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:38.327743  188656 cri.go:89] found id: ""
	I0731 21:01:38.327777  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.327788  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:38.327797  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:38.327853  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:38.361373  188656 cri.go:89] found id: ""
	I0731 21:01:38.361409  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.361421  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:38.361428  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:38.361501  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:38.396832  188656 cri.go:89] found id: ""
	I0731 21:01:38.396860  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.396868  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:38.396873  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:38.396923  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:38.431822  188656 cri.go:89] found id: ""
	I0731 21:01:38.431855  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.431868  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:38.431880  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:38.431895  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:38.481994  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:38.482028  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:38.495885  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:38.495911  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:38.563384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:38.563411  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:38.563437  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:38.646806  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:38.646848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:36.089465  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.590301  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:37.611057  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.110731  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.015769  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.513690  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:41.187323  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:41.200995  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:41.201063  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:41.241620  188656 cri.go:89] found id: ""
	I0731 21:01:41.241651  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.241663  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:41.241671  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:41.241745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:41.279565  188656 cri.go:89] found id: ""
	I0731 21:01:41.279595  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.279604  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:41.279609  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:41.279666  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:41.320710  188656 cri.go:89] found id: ""
	I0731 21:01:41.320744  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.320755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:41.320763  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:41.320834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:41.356428  188656 cri.go:89] found id: ""
	I0731 21:01:41.356460  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.356472  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:41.356480  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:41.356544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:41.390493  188656 cri.go:89] found id: ""
	I0731 21:01:41.390525  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.390536  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:41.390544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:41.390612  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:41.424244  188656 cri.go:89] found id: ""
	I0731 21:01:41.424271  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.424282  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:41.424290  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:41.424350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:41.459916  188656 cri.go:89] found id: ""
	I0731 21:01:41.459946  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.459955  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:41.459961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:41.460012  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:41.493891  188656 cri.go:89] found id: ""
	I0731 21:01:41.493917  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.493926  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:41.493936  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:41.493950  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:41.544066  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:41.544106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:41.558504  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:41.558534  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:41.632996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:41.633021  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:41.633039  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:41.712637  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:41.712677  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:41.087979  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:43.088834  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.610136  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:45.109986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.514059  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.514535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.014970  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.255947  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:44.268961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:44.269050  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:44.304621  188656 cri.go:89] found id: ""
	I0731 21:01:44.304656  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.304668  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:44.304676  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:44.304732  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:44.339389  188656 cri.go:89] found id: ""
	I0731 21:01:44.339429  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.339441  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:44.339448  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:44.339510  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:44.373069  188656 cri.go:89] found id: ""
	I0731 21:01:44.373095  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.373103  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:44.373110  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:44.373179  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:44.408784  188656 cri.go:89] found id: ""
	I0731 21:01:44.408812  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.408821  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:44.408829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:44.408896  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:44.445636  188656 cri.go:89] found id: ""
	I0731 21:01:44.445671  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.445682  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:44.445690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:44.445759  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:44.483529  188656 cri.go:89] found id: ""
	I0731 21:01:44.483565  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.483577  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:44.483585  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:44.483643  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:44.517959  188656 cri.go:89] found id: ""
	I0731 21:01:44.517980  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.517987  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:44.517993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:44.518042  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:44.552322  188656 cri.go:89] found id: ""
	I0731 21:01:44.552367  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.552392  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:44.552405  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:44.552421  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:44.625005  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:44.625030  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:44.625043  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:44.702547  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:44.702585  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:44.741754  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:44.741792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:44.795179  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:44.795216  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.309995  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:47.323993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:47.324076  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:47.365546  188656 cri.go:89] found id: ""
	I0731 21:01:47.365576  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.365587  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:47.365595  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:47.365682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:47.402774  188656 cri.go:89] found id: ""
	I0731 21:01:47.402810  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.402822  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:47.402831  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:47.402899  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:47.440716  188656 cri.go:89] found id: ""
	I0731 21:01:47.440746  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.440755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:47.440761  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:47.440811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:47.479418  188656 cri.go:89] found id: ""
	I0731 21:01:47.479450  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.479461  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:47.479469  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:47.479535  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:47.514027  188656 cri.go:89] found id: ""
	I0731 21:01:47.514065  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.514074  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:47.514081  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:47.514149  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:47.550178  188656 cri.go:89] found id: ""
	I0731 21:01:47.550203  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.550212  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:47.550218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:47.550271  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:47.587844  188656 cri.go:89] found id: ""
	I0731 21:01:47.587873  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.587883  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:47.587891  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:47.587945  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:47.627581  188656 cri.go:89] found id: ""
	I0731 21:01:47.627608  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.627620  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:47.627633  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:47.627647  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:47.683364  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:47.683408  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.697882  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:47.697917  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:47.773804  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:47.773834  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:47.773848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:47.859356  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:47.859404  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:45.090199  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.091328  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.610631  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.109476  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:49.514186  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.013486  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.402403  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:50.417269  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:50.417332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:50.452762  188656 cri.go:89] found id: ""
	I0731 21:01:50.452786  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.452793  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:50.452799  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:50.452852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:50.486741  188656 cri.go:89] found id: ""
	I0731 21:01:50.486771  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.486782  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:50.486789  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:50.486855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:50.526144  188656 cri.go:89] found id: ""
	I0731 21:01:50.526174  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.526185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:50.526193  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:50.526246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:50.560957  188656 cri.go:89] found id: ""
	I0731 21:01:50.560985  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.560995  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:50.561003  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:50.561065  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:50.597228  188656 cri.go:89] found id: ""
	I0731 21:01:50.597258  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.597269  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:50.597275  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:50.597357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:50.638153  188656 cri.go:89] found id: ""
	I0731 21:01:50.638183  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.638199  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:50.638208  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:50.638270  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:50.672236  188656 cri.go:89] found id: ""
	I0731 21:01:50.672266  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.672274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:50.672280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:50.672340  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:50.704069  188656 cri.go:89] found id: ""
	I0731 21:01:50.704093  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.704102  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:50.704112  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:50.704125  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:50.757973  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:50.758010  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:50.771203  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:50.771229  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:50.842937  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:50.842956  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:50.842969  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:50.925819  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:50.925857  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.470691  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:53.485260  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:53.485332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:53.524110  188656 cri.go:89] found id: ""
	I0731 21:01:53.524139  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.524148  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:53.524154  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:53.524215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:53.557642  188656 cri.go:89] found id: ""
	I0731 21:01:53.557668  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.557676  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:53.557682  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:53.557737  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:53.595594  188656 cri.go:89] found id: ""
	I0731 21:01:53.595622  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.595641  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:53.595647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:53.595712  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:53.634458  188656 cri.go:89] found id: ""
	I0731 21:01:53.634487  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.634499  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:53.634507  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:53.634567  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:53.674124  188656 cri.go:89] found id: ""
	I0731 21:01:53.674149  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.674157  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:53.674164  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:53.674234  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:53.706861  188656 cri.go:89] found id: ""
	I0731 21:01:53.706888  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.706897  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:53.706903  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:53.706957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:53.745476  188656 cri.go:89] found id: ""
	I0731 21:01:53.745504  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.745511  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:53.745522  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:53.745575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:53.780847  188656 cri.go:89] found id: ""
	I0731 21:01:53.780878  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.780889  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:53.780902  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:53.780922  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:49.589017  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.088587  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.088885  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.109889  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.110634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.014383  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.512884  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:53.853469  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:53.853497  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:53.853517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:53.930506  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:53.930544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.975439  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:53.975475  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:54.027903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:54.027937  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.542860  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:56.557744  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:56.557813  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:56.596034  188656 cri.go:89] found id: ""
	I0731 21:01:56.596065  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.596075  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:56.596082  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:56.596146  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:56.631531  188656 cri.go:89] found id: ""
	I0731 21:01:56.631561  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.631572  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:56.631579  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:56.631653  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:56.665824  188656 cri.go:89] found id: ""
	I0731 21:01:56.665853  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.665865  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:56.665872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:56.665940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:56.698965  188656 cri.go:89] found id: ""
	I0731 21:01:56.698993  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.699002  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:56.699008  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:56.699074  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:56.735314  188656 cri.go:89] found id: ""
	I0731 21:01:56.735347  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.735359  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:56.735367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:56.735443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:56.770350  188656 cri.go:89] found id: ""
	I0731 21:01:56.770383  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.770393  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:56.770402  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:56.770485  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:56.808934  188656 cri.go:89] found id: ""
	I0731 21:01:56.808962  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.808970  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:56.808976  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:56.809027  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:56.845305  188656 cri.go:89] found id: ""
	I0731 21:01:56.845331  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.845354  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:56.845366  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:56.845383  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:56.922810  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:56.922832  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:56.922846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:56.998009  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:56.998046  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:57.037905  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:57.037934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:57.092438  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:57.092469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.591334  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.089696  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.110825  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.111013  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.111696  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.513270  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.514474  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.608087  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:59.622465  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:59.622537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:59.660221  188656 cri.go:89] found id: ""
	I0731 21:01:59.660254  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.660265  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:59.660274  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:59.660338  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:59.696158  188656 cri.go:89] found id: ""
	I0731 21:01:59.696193  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.696205  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:59.696213  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:59.696272  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:59.733607  188656 cri.go:89] found id: ""
	I0731 21:01:59.733635  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.733646  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:59.733656  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:59.733727  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:59.770298  188656 cri.go:89] found id: ""
	I0731 21:01:59.770327  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.770336  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:59.770342  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:59.770396  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:59.805630  188656 cri.go:89] found id: ""
	I0731 21:01:59.805659  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.805670  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:59.805682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:59.805749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:59.841064  188656 cri.go:89] found id: ""
	I0731 21:01:59.841089  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.841098  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:59.841106  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:59.841166  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:59.877237  188656 cri.go:89] found id: ""
	I0731 21:01:59.877265  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.877274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:59.877284  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:59.877364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:59.917102  188656 cri.go:89] found id: ""
	I0731 21:01:59.917138  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.917166  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:59.917179  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:59.917196  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:59.971806  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:59.971846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:59.986267  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:59.986304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:00.063185  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:00.063227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:00.063244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:00.148498  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:00.148541  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:02.690235  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:02.704623  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:02.704703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:02.740557  188656 cri.go:89] found id: ""
	I0731 21:02:02.740588  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.740599  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:02.740606  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:02.740667  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:02.776340  188656 cri.go:89] found id: ""
	I0731 21:02:02.776382  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.776391  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:02.776396  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:02.776449  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:02.811645  188656 cri.go:89] found id: ""
	I0731 21:02:02.811673  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.811683  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:02.811691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:02.811754  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:02.847226  188656 cri.go:89] found id: ""
	I0731 21:02:02.847259  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.847267  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:02.847273  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:02.847326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:02.885591  188656 cri.go:89] found id: ""
	I0731 21:02:02.885617  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.885626  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:02.885631  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:02.885694  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:02.924250  188656 cri.go:89] found id: ""
	I0731 21:02:02.924281  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.924289  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:02.924296  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:02.924358  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:02.959608  188656 cri.go:89] found id: ""
	I0731 21:02:02.959638  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.959649  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:02.959657  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:02.959731  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:02.998175  188656 cri.go:89] found id: ""
	I0731 21:02:02.998205  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.998215  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:02.998228  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:02.998248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:03.053320  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:03.053382  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:03.067681  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:03.067711  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:03.145222  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:03.145251  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:03.145270  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:03.228413  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:03.228456  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:01.590197  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:04.087692  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:02.610477  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.110544  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:03.016030  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.513082  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.780407  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:05.793872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:05.793952  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:05.828940  188656 cri.go:89] found id: ""
	I0731 21:02:05.828971  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.828980  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:05.828987  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:05.829051  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:05.866470  188656 cri.go:89] found id: ""
	I0731 21:02:05.866503  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.866515  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:05.866522  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:05.866594  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:05.904756  188656 cri.go:89] found id: ""
	I0731 21:02:05.904792  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.904807  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:05.904814  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:05.904868  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:05.941534  188656 cri.go:89] found id: ""
	I0731 21:02:05.941564  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.941574  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:05.941581  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:05.941649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:05.980413  188656 cri.go:89] found id: ""
	I0731 21:02:05.980453  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.980465  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:05.980472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:05.980563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:06.023226  188656 cri.go:89] found id: ""
	I0731 21:02:06.023258  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.023269  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:06.023277  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:06.023345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:06.061098  188656 cri.go:89] found id: ""
	I0731 21:02:06.061130  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.061138  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:06.061145  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:06.061195  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:06.097825  188656 cri.go:89] found id: ""
	I0731 21:02:06.097852  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.097860  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:06.097870  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:06.097883  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:06.149181  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:06.149223  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:06.164610  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:06.164651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:06.248639  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:06.248666  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:06.248684  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:06.332445  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:06.332486  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:06.089967  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.588610  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.610691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.611166  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.513999  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.514554  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:11.516493  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.873697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:08.887632  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:08.887745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:08.926002  188656 cri.go:89] found id: ""
	I0731 21:02:08.926032  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.926042  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:08.926051  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:08.926117  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:08.962999  188656 cri.go:89] found id: ""
	I0731 21:02:08.963028  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.963039  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:08.963047  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:08.963103  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:09.023016  188656 cri.go:89] found id: ""
	I0731 21:02:09.023043  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.023051  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:09.023057  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:09.023109  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:09.059672  188656 cri.go:89] found id: ""
	I0731 21:02:09.059699  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.059708  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:09.059714  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:09.059774  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:09.097603  188656 cri.go:89] found id: ""
	I0731 21:02:09.097635  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.097645  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:09.097653  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:09.097720  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:09.136210  188656 cri.go:89] found id: ""
	I0731 21:02:09.136240  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.136251  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:09.136259  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:09.136326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:09.176167  188656 cri.go:89] found id: ""
	I0731 21:02:09.176204  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.176211  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:09.176218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:09.176277  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:09.214151  188656 cri.go:89] found id: ""
	I0731 21:02:09.214180  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.214189  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:09.214199  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:09.214212  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:09.267579  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:09.267618  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:09.282420  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:09.282445  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:09.354067  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:09.354092  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:09.354111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:09.433454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:09.433500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.979715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:11.993050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:11.993123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:12.027731  188656 cri.go:89] found id: ""
	I0731 21:02:12.027759  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.027767  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:12.027773  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:12.027834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:12.064410  188656 cri.go:89] found id: ""
	I0731 21:02:12.064442  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.064452  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:12.064459  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:12.064525  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:12.101061  188656 cri.go:89] found id: ""
	I0731 21:02:12.101096  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.101107  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:12.101115  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:12.101176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:12.142240  188656 cri.go:89] found id: ""
	I0731 21:02:12.142271  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.142284  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:12.142292  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:12.142357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:12.184949  188656 cri.go:89] found id: ""
	I0731 21:02:12.184980  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.184988  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:12.184994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:12.185064  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:12.226031  188656 cri.go:89] found id: ""
	I0731 21:02:12.226068  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.226080  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:12.226089  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:12.226155  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:12.272880  188656 cri.go:89] found id: ""
	I0731 21:02:12.272913  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.272923  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:12.272931  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:12.272989  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:12.306968  188656 cri.go:89] found id: ""
	I0731 21:02:12.307011  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.307033  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:12.307068  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:12.307090  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:12.359357  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:12.359402  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:12.374817  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:12.374848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:12.445107  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:12.445128  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:12.445141  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:12.530017  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:12.530058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.088281  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:13.090442  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:12.110720  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.611142  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.013967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:16.014021  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:15.070277  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:15.084326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:15.084411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:15.123513  188656 cri.go:89] found id: ""
	I0731 21:02:15.123549  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.123562  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:15.123569  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:15.123624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:15.159855  188656 cri.go:89] found id: ""
	I0731 21:02:15.159888  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.159899  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:15.159908  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:15.159973  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:15.195879  188656 cri.go:89] found id: ""
	I0731 21:02:15.195911  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.195919  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:15.195926  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:15.195986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:15.231216  188656 cri.go:89] found id: ""
	I0731 21:02:15.231249  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.231258  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:15.231265  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:15.231331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:15.265711  188656 cri.go:89] found id: ""
	I0731 21:02:15.265740  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.265748  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:15.265754  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:15.265803  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:15.300991  188656 cri.go:89] found id: ""
	I0731 21:02:15.301020  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.301027  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:15.301033  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:15.301083  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:15.338507  188656 cri.go:89] found id: ""
	I0731 21:02:15.338533  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.338542  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:15.338550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:15.338614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:15.375540  188656 cri.go:89] found id: ""
	I0731 21:02:15.375583  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.375595  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:15.375606  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:15.375631  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:15.428903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:15.428946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:15.444018  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:15.444052  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:15.518807  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.518842  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:15.518859  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:15.602655  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:15.602693  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.158731  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:18.172861  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:18.172940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:18.207451  188656 cri.go:89] found id: ""
	I0731 21:02:18.207480  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.207489  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:18.207495  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:18.207555  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:18.244974  188656 cri.go:89] found id: ""
	I0731 21:02:18.245004  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.245013  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:18.245019  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:18.245079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:18.281589  188656 cri.go:89] found id: ""
	I0731 21:02:18.281622  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.281630  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:18.281637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:18.281698  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:18.321413  188656 cri.go:89] found id: ""
	I0731 21:02:18.321445  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.321455  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:18.321461  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:18.321526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:18.360600  188656 cri.go:89] found id: ""
	I0731 21:02:18.360627  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.360639  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:18.360647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:18.360707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:18.396312  188656 cri.go:89] found id: ""
	I0731 21:02:18.396344  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.396356  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:18.396364  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:18.396451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:18.431586  188656 cri.go:89] found id: ""
	I0731 21:02:18.431618  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.431630  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:18.431637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:18.431711  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:18.472995  188656 cri.go:89] found id: ""
	I0731 21:02:18.473025  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.473035  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:18.473047  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:18.473063  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:18.558826  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:18.558865  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.600083  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:18.600110  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:18.657944  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:18.657988  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:18.672860  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:18.672888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:18.748806  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.589795  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.088699  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:17.112784  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:19.609312  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.513798  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.014437  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
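The interleaved pod_ready.go lines belong to the three other StartStop profiles (PIDs 188133, 188266 and 187862), each polling a metrics-server pod that never reports Ready. A roughly equivalent manual spot check, where the context name and the k8s-app=metrics-server label selector are assumptions rather than values taken from this log, would be:

    # Hypothetical spot check of the same Ready condition that pod_ready.go keeps polling.
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\tReady="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'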
	I0731 21:02:21.249418  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:21.263304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:21.263385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:21.298591  188656 cri.go:89] found id: ""
	I0731 21:02:21.298624  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.298635  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:21.298643  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:21.298707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:21.335913  188656 cri.go:89] found id: ""
	I0731 21:02:21.335939  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.335947  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:21.335954  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:21.336011  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:21.378314  188656 cri.go:89] found id: ""
	I0731 21:02:21.378347  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.378359  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:21.378368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:21.378436  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:21.422707  188656 cri.go:89] found id: ""
	I0731 21:02:21.422738  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.422748  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:21.422757  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:21.422826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:21.487851  188656 cri.go:89] found id: ""
	I0731 21:02:21.487878  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.487887  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:21.487893  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:21.487946  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:21.528944  188656 cri.go:89] found id: ""
	I0731 21:02:21.528970  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.528981  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:21.528990  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:21.529054  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:21.565091  188656 cri.go:89] found id: ""
	I0731 21:02:21.565118  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.565126  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:21.565132  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:21.565182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:21.599985  188656 cri.go:89] found id: ""
	I0731 21:02:21.600015  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.600027  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:21.600041  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:21.600057  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:21.652065  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:21.652106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:21.666497  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:21.666528  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:21.741853  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:21.741893  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:21.741919  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:21.822478  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:21.822517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:20.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:22.589558  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.610996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.111590  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:23.513209  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:25.514400  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.363018  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:24.375640  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:24.375704  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:24.411383  188656 cri.go:89] found id: ""
	I0731 21:02:24.411416  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.411427  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:24.411436  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:24.411513  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:24.447536  188656 cri.go:89] found id: ""
	I0731 21:02:24.447565  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.447573  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:24.447578  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:24.447651  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:24.489270  188656 cri.go:89] found id: ""
	I0731 21:02:24.489301  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.489311  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:24.489320  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:24.489398  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:24.527891  188656 cri.go:89] found id: ""
	I0731 21:02:24.527922  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.527932  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:24.527938  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:24.527998  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:24.566854  188656 cri.go:89] found id: ""
	I0731 21:02:24.566886  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.566897  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:24.566904  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:24.566974  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:24.606234  188656 cri.go:89] found id: ""
	I0731 21:02:24.606267  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.606278  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:24.606285  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:24.606357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:24.642880  188656 cri.go:89] found id: ""
	I0731 21:02:24.642909  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.642921  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:24.642929  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:24.642982  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:24.680069  188656 cri.go:89] found id: ""
	I0731 21:02:24.680101  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.680112  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:24.680124  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:24.680142  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:24.735337  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:24.735378  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:24.749010  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:24.749040  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:24.826406  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:24.826441  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:24.826458  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.906995  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:24.907049  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.451405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:27.474178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:27.474251  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:27.514912  188656 cri.go:89] found id: ""
	I0731 21:02:27.514938  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.514945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:27.514951  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:27.515007  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:27.552850  188656 cri.go:89] found id: ""
	I0731 21:02:27.552880  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.552890  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:27.552896  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:27.552953  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:27.590468  188656 cri.go:89] found id: ""
	I0731 21:02:27.590496  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.590503  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:27.590509  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:27.590572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:27.626295  188656 cri.go:89] found id: ""
	I0731 21:02:27.626322  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.626330  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:27.626339  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:27.626391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:27.662654  188656 cri.go:89] found id: ""
	I0731 21:02:27.662690  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.662701  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:27.662708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:27.662770  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:27.699528  188656 cri.go:89] found id: ""
	I0731 21:02:27.699558  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.699566  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:27.699572  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:27.699639  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:27.740501  188656 cri.go:89] found id: ""
	I0731 21:02:27.740528  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.740539  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:27.740547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:27.740613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:27.778919  188656 cri.go:89] found id: ""
	I0731 21:02:27.778954  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.778966  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:27.778980  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:27.778999  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.815475  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:27.815500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:27.866578  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:27.866615  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:27.880799  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:27.880830  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:27.948987  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:27.949014  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:27.949032  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.596180  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:27.088624  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:26.610897  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:29.110263  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:28.014828  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.514006  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.532314  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:30.546245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:30.546317  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:30.581736  188656 cri.go:89] found id: ""
	I0731 21:02:30.581763  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.581772  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:30.581778  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:30.581837  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:30.618790  188656 cri.go:89] found id: ""
	I0731 21:02:30.618816  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.618824  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:30.618830  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:30.618886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:30.654504  188656 cri.go:89] found id: ""
	I0731 21:02:30.654530  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.654538  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:30.654544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:30.654603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:30.690570  188656 cri.go:89] found id: ""
	I0731 21:02:30.690598  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.690609  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:30.690617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:30.690683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:30.739676  188656 cri.go:89] found id: ""
	I0731 21:02:30.739705  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.739715  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:30.739723  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:30.739789  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:30.777860  188656 cri.go:89] found id: ""
	I0731 21:02:30.777891  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.777902  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:30.777911  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:30.777995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:30.814036  188656 cri.go:89] found id: ""
	I0731 21:02:30.814073  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.814088  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:30.814096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:30.814168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:30.847262  188656 cri.go:89] found id: ""
	I0731 21:02:30.847292  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.847304  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:30.847316  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:30.847338  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:30.898556  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:30.898596  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:30.912940  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:30.912974  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:30.987384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:30.987405  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:30.987419  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:31.071376  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:31.071416  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:33.613677  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:33.628304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:33.628380  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:33.662932  188656 cri.go:89] found id: ""
	I0731 21:02:33.662965  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.662977  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:33.662985  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:33.663055  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:33.697445  188656 cri.go:89] found id: ""
	I0731 21:02:33.697477  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.697487  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:33.697493  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:33.697553  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:33.734480  188656 cri.go:89] found id: ""
	I0731 21:02:33.734516  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.734527  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:33.734536  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:33.734614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:33.770069  188656 cri.go:89] found id: ""
	I0731 21:02:33.770095  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.770104  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:33.770111  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:33.770194  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:33.806315  188656 cri.go:89] found id: ""
	I0731 21:02:33.806341  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.806350  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:33.806356  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:33.806408  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:29.592432  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:32.088842  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:34.089378  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:31.112420  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.611815  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.014022  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:35.014517  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:37.018514  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.842747  188656 cri.go:89] found id: ""
	I0731 21:02:33.842775  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.842782  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:33.842789  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:33.842856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:33.877581  188656 cri.go:89] found id: ""
	I0731 21:02:33.877607  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.877616  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:33.877622  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:33.877682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:33.913238  188656 cri.go:89] found id: ""
	I0731 21:02:33.913263  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.913271  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:33.913282  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:33.913298  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:33.967112  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:33.967148  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:33.980961  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:33.980994  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:34.054886  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:34.054917  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:34.054939  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:34.143088  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:34.143127  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:36.687110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:36.700649  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:36.700725  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:36.737796  188656 cri.go:89] found id: ""
	I0731 21:02:36.737829  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.737841  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:36.737849  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:36.737916  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:36.773010  188656 cri.go:89] found id: ""
	I0731 21:02:36.773048  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.773059  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:36.773067  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:36.773136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:36.813945  188656 cri.go:89] found id: ""
	I0731 21:02:36.813978  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.813988  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:36.813994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:36.814047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:36.849826  188656 cri.go:89] found id: ""
	I0731 21:02:36.849860  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.849872  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:36.849880  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:36.849943  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:36.887200  188656 cri.go:89] found id: ""
	I0731 21:02:36.887233  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.887244  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:36.887253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:36.887391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:36.922529  188656 cri.go:89] found id: ""
	I0731 21:02:36.922562  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.922573  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:36.922582  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:36.922644  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:36.958119  188656 cri.go:89] found id: ""
	I0731 21:02:36.958154  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.958166  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:36.958174  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:36.958240  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:37.001071  188656 cri.go:89] found id: ""
	I0731 21:02:37.001104  188656 logs.go:276] 0 containers: []
	W0731 21:02:37.001113  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:37.001123  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:37.001136  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:37.041248  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:37.041288  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:37.100519  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:37.100558  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:37.115157  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:37.115188  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:37.191232  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:37.191259  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:37.191277  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:36.588213  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.589224  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:36.109307  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.110675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:40.111284  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.514052  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.013265  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.772834  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:39.788137  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:39.788203  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:39.827329  188656 cri.go:89] found id: ""
	I0731 21:02:39.827361  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.827371  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:39.827378  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:39.827458  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:39.864855  188656 cri.go:89] found id: ""
	I0731 21:02:39.864882  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.864889  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:39.864897  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:39.864958  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:39.901955  188656 cri.go:89] found id: ""
	I0731 21:02:39.901981  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.901990  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:39.901996  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:39.902059  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:39.941376  188656 cri.go:89] found id: ""
	I0731 21:02:39.941402  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.941412  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:39.941418  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:39.941473  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:39.975321  188656 cri.go:89] found id: ""
	I0731 21:02:39.975352  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.975364  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:39.975394  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:39.975465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:40.010106  188656 cri.go:89] found id: ""
	I0731 21:02:40.010136  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.010148  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:40.010157  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:40.010220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:40.043963  188656 cri.go:89] found id: ""
	I0731 21:02:40.043997  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.044009  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:40.044017  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:40.044089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:40.079178  188656 cri.go:89] found id: ""
	I0731 21:02:40.079216  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.079224  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:40.079234  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:40.079248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:40.141115  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:40.141158  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:40.156722  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:40.156758  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:40.233758  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:40.233782  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:40.233797  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:40.317316  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:40.317375  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
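The "container status" step above relies on a small shell fallback: it substitutes the path of crictl if it is on PATH (or the bare word crictl if which finds nothing), and only if that invocation fails does it drop back to docker ps -a. Rewritten over several lines purely for readability, on the assumption that the behaviour is unchanged:

    # Same fallback chain as the container-status gathering command in the log.
    sudo "$(which crictl || echo crictl)" ps -a \
        || sudo docker ps -a    # only reached if the crictl call exits non-zero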
	I0731 21:02:42.858649  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:42.872135  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:42.872221  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:42.911966  188656 cri.go:89] found id: ""
	I0731 21:02:42.911998  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.912007  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:42.912014  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:42.912081  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:42.950036  188656 cri.go:89] found id: ""
	I0731 21:02:42.950070  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.950079  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:42.950085  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:42.950138  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:42.987201  188656 cri.go:89] found id: ""
	I0731 21:02:42.987233  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.987245  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:42.987253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:42.987326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:43.027250  188656 cri.go:89] found id: ""
	I0731 21:02:43.027285  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.027297  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:43.027306  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:43.027374  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:43.063419  188656 cri.go:89] found id: ""
	I0731 21:02:43.063448  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.063456  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:43.063463  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:43.063527  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:43.101155  188656 cri.go:89] found id: ""
	I0731 21:02:43.101184  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.101193  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:43.101199  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:43.101249  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:43.142633  188656 cri.go:89] found id: ""
	I0731 21:02:43.142658  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.142667  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:43.142675  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:43.142741  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:43.177747  188656 cri.go:89] found id: ""
	I0731 21:02:43.177780  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.177789  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:43.177799  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:43.177813  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:43.228074  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:43.228114  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:43.242132  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:43.242165  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:43.313026  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:43.313054  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:43.313072  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:43.394620  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:43.394663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:40.589306  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.589428  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.612236  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.110401  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:44.513370  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:46.514350  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.937932  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:45.951871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:45.951964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:45.987615  188656 cri.go:89] found id: ""
	I0731 21:02:45.987642  188656 logs.go:276] 0 containers: []
	W0731 21:02:45.987650  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:45.987656  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:45.987715  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:46.022632  188656 cri.go:89] found id: ""
	I0731 21:02:46.022659  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.022667  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:46.022674  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:46.022746  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:46.061153  188656 cri.go:89] found id: ""
	I0731 21:02:46.061182  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.061191  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:46.061196  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:46.061246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:46.099168  188656 cri.go:89] found id: ""
	I0731 21:02:46.099197  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.099206  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:46.099212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:46.099266  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:46.137269  188656 cri.go:89] found id: ""
	I0731 21:02:46.137300  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.137312  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:46.137321  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:46.137403  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:46.172330  188656 cri.go:89] found id: ""
	I0731 21:02:46.172391  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.172404  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:46.172417  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:46.172489  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:46.213314  188656 cri.go:89] found id: ""
	I0731 21:02:46.213358  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.213370  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:46.213378  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:46.213451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:46.248663  188656 cri.go:89] found id: ""
	I0731 21:02:46.248697  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.248707  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:46.248719  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:46.248735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:46.305433  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:46.305472  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:46.319065  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:46.319098  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:46.387025  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:46.387046  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:46.387058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:46.476721  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:46.476769  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:44.589757  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.089954  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.112823  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.114163  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.014193  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.014760  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.020882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:49.036502  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:49.036573  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:49.076478  188656 cri.go:89] found id: ""
	I0731 21:02:49.076509  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.076518  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:49.076525  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:49.076578  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:49.116065  188656 cri.go:89] found id: ""
	I0731 21:02:49.116098  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.116106  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:49.116112  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:49.116168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:49.153237  188656 cri.go:89] found id: ""
	I0731 21:02:49.153274  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.153287  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:49.153295  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:49.153385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:49.192821  188656 cri.go:89] found id: ""
	I0731 21:02:49.192849  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.192858  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:49.192864  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:49.192918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:49.230627  188656 cri.go:89] found id: ""
	I0731 21:02:49.230660  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.230671  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:49.230679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:49.230749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:49.266575  188656 cri.go:89] found id: ""
	I0731 21:02:49.266603  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.266611  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:49.266617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:49.266688  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:49.312489  188656 cri.go:89] found id: ""
	I0731 21:02:49.312522  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.312533  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:49.312541  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:49.312613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:49.348907  188656 cri.go:89] found id: ""
	I0731 21:02:49.348932  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.348941  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:49.348950  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:49.348965  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:49.363229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:49.363267  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:49.435708  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:49.435732  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:49.435745  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.522002  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:49.522047  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:49.566823  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:49.566868  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.122660  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:52.136559  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:52.136629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:52.173198  188656 cri.go:89] found id: ""
	I0731 21:02:52.173227  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.173236  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:52.173242  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:52.173310  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:52.208464  188656 cri.go:89] found id: ""
	I0731 21:02:52.208503  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.208514  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:52.208521  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:52.208590  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:52.246052  188656 cri.go:89] found id: ""
	I0731 21:02:52.246084  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.246091  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:52.246098  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:52.246160  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:52.281798  188656 cri.go:89] found id: ""
	I0731 21:02:52.281831  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.281843  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:52.281852  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:52.281918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:52.318924  188656 cri.go:89] found id: ""
	I0731 21:02:52.318954  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.318975  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:52.318983  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:52.319052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:52.356752  188656 cri.go:89] found id: ""
	I0731 21:02:52.356788  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.356800  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:52.356809  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:52.356874  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:52.391507  188656 cri.go:89] found id: ""
	I0731 21:02:52.391537  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.391545  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:52.391551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:52.391602  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:52.430714  188656 cri.go:89] found id: ""
	I0731 21:02:52.430749  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.430761  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:52.430774  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:52.430792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:52.482600  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:52.482629  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.535317  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:52.535361  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:52.549835  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:52.549874  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:52.628319  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:52.628347  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:52.628365  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.590499  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:52.089170  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.089832  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.610237  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.112782  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:53.513932  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.516784  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.216678  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:55.231142  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:55.231225  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:55.266283  188656 cri.go:89] found id: ""
	I0731 21:02:55.266321  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.266334  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:55.266341  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:55.266399  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:55.301457  188656 cri.go:89] found id: ""
	I0731 21:02:55.301493  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.301506  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:55.301514  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:55.301574  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:55.338427  188656 cri.go:89] found id: ""
	I0731 21:02:55.338453  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.338461  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:55.338467  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:55.338521  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:55.373718  188656 cri.go:89] found id: ""
	I0731 21:02:55.373748  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.373757  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:55.373764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:55.373846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:55.410989  188656 cri.go:89] found id: ""
	I0731 21:02:55.411022  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.411034  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:55.411042  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:55.411100  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:55.452867  188656 cri.go:89] found id: ""
	I0731 21:02:55.452904  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.452915  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:55.452924  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:55.452995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:55.512781  188656 cri.go:89] found id: ""
	I0731 21:02:55.512809  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.512821  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:55.512829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:55.512894  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:55.550460  188656 cri.go:89] found id: ""
	I0731 21:02:55.550487  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.550495  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:55.550505  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:55.550521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:55.625776  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:55.625804  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:55.625821  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:55.711276  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:55.711322  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:55.765078  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:55.765111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:55.818131  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:55.818176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:58.332914  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:58.346908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:58.346992  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:58.383641  188656 cri.go:89] found id: ""
	I0731 21:02:58.383686  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.383695  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:58.383700  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:58.383753  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:58.419538  188656 cri.go:89] found id: ""
	I0731 21:02:58.419566  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.419576  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:58.419584  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:58.419649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:58.457036  188656 cri.go:89] found id: ""
	I0731 21:02:58.457069  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.457080  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:58.457088  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:58.457162  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:58.497596  188656 cri.go:89] found id: ""
	I0731 21:02:58.497621  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.497629  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:58.497635  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:58.497706  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:58.538184  188656 cri.go:89] found id: ""
	I0731 21:02:58.538211  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.538220  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:58.538226  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:58.538291  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:58.584428  188656 cri.go:89] found id: ""
	I0731 21:02:58.584457  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.584468  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:58.584476  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:58.584537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:58.625052  188656 cri.go:89] found id: ""
	I0731 21:02:58.625084  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.625096  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:58.625103  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:58.625171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:58.662222  188656 cri.go:89] found id: ""
	I0731 21:02:58.662248  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.662256  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:58.662266  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:58.662278  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:58.740491  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:58.740530  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:58.782685  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:58.782714  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:58.833620  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:58.833668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:56.091277  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.589516  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:56.609399  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.610957  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.013927  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:00.015179  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.848679  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:58.848713  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:58.925496  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.426171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:01.440261  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:01.440341  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:01.477362  188656 cri.go:89] found id: ""
	I0731 21:03:01.477393  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.477405  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:01.477414  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:01.477483  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:01.516640  188656 cri.go:89] found id: ""
	I0731 21:03:01.516675  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.516692  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:01.516701  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:01.516764  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:01.560713  188656 cri.go:89] found id: ""
	I0731 21:03:01.560744  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.560756  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:01.560762  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:01.560844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:01.604050  188656 cri.go:89] found id: ""
	I0731 21:03:01.604086  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.604097  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:01.604105  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:01.604170  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:01.641358  188656 cri.go:89] found id: ""
	I0731 21:03:01.641391  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.641401  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:01.641406  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:01.641471  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:01.677332  188656 cri.go:89] found id: ""
	I0731 21:03:01.677380  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.677390  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:01.677397  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:01.677459  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:01.713781  188656 cri.go:89] found id: ""
	I0731 21:03:01.713815  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.713826  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:01.713833  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:01.713914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:01.757499  188656 cri.go:89] found id: ""
	I0731 21:03:01.757543  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.757552  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:01.757563  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:01.757575  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:01.832330  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.832370  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:01.832384  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:01.918996  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:01.919050  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:01.979268  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:01.979307  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:02.037528  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:02.037564  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:00.591373  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.089405  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:01.110471  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.611348  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:02.513998  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:05.015060  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:04.552758  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:04.566881  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:04.566960  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:04.604631  188656 cri.go:89] found id: ""
	I0731 21:03:04.604669  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.604680  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:04.604688  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:04.604791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:04.644027  188656 cri.go:89] found id: ""
	I0731 21:03:04.644052  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.644061  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:04.644068  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:04.644134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:04.680010  188656 cri.go:89] found id: ""
	I0731 21:03:04.680037  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.680045  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:04.680050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:04.680102  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:04.717095  188656 cri.go:89] found id: ""
	I0731 21:03:04.717123  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.717133  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:04.717140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:04.717212  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:04.755297  188656 cri.go:89] found id: ""
	I0731 21:03:04.755324  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.755331  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:04.755337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:04.755387  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:04.792073  188656 cri.go:89] found id: ""
	I0731 21:03:04.792104  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.792113  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:04.792119  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:04.792168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:04.828428  188656 cri.go:89] found id: ""
	I0731 21:03:04.828460  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.828468  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:04.828475  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:04.828541  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:04.863871  188656 cri.go:89] found id: ""
	I0731 21:03:04.863905  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.863916  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:04.863929  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:04.863946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:04.879591  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:04.879626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:04.962199  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:04.962227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:04.962245  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.048502  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:05.048547  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:05.090812  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:05.090838  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:07.647307  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:07.664586  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:07.664656  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:07.719851  188656 cri.go:89] found id: ""
	I0731 21:03:07.719887  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.719899  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:07.719908  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:07.719978  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:07.778295  188656 cri.go:89] found id: ""
	I0731 21:03:07.778330  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.778343  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:07.778350  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:07.778417  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:07.817911  188656 cri.go:89] found id: ""
	I0731 21:03:07.817937  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.817947  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:07.817954  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:07.818004  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:07.853177  188656 cri.go:89] found id: ""
	I0731 21:03:07.853211  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.853222  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:07.853229  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:07.853308  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:07.888992  188656 cri.go:89] found id: ""
	I0731 21:03:07.889020  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.889046  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:07.889055  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:07.889133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:07.924327  188656 cri.go:89] found id: ""
	I0731 21:03:07.924358  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.924369  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:07.924377  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:07.924461  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:07.964438  188656 cri.go:89] found id: ""
	I0731 21:03:07.964470  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.964480  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:07.964489  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:07.964572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:08.003566  188656 cri.go:89] found id: ""
	I0731 21:03:08.003610  188656 logs.go:276] 0 containers: []
	W0731 21:03:08.003621  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:08.003634  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:08.003651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:08.044246  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:08.044286  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:08.097479  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:08.097517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:08.113636  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:08.113663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:08.187217  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:08.187244  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:08.187261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.090205  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.589488  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:06.110184  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:08.111598  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.611986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.513036  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:09.513637  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.514176  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.771248  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:10.786159  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:10.786232  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:10.823724  188656 cri.go:89] found id: ""
	I0731 21:03:10.823756  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.823769  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:10.823777  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:10.823846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:10.862440  188656 cri.go:89] found id: ""
	I0731 21:03:10.862468  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.862480  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:10.862488  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:10.862544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:10.901499  188656 cri.go:89] found id: ""
	I0731 21:03:10.901527  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.901539  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:10.901547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:10.901611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:10.940255  188656 cri.go:89] found id: ""
	I0731 21:03:10.940279  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.940287  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:10.940293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:10.940356  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:10.975315  188656 cri.go:89] found id: ""
	I0731 21:03:10.975344  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.975353  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:10.975360  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:10.975420  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:11.011453  188656 cri.go:89] found id: ""
	I0731 21:03:11.011482  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.011538  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:11.011549  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:11.011611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:11.047846  188656 cri.go:89] found id: ""
	I0731 21:03:11.047887  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.047899  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:11.047907  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:11.047972  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:11.086243  188656 cri.go:89] found id: ""
	I0731 21:03:11.086271  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.086282  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:11.086293  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:11.086309  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:11.139390  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:11.139430  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:11.154637  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:11.154669  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:11.225996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:11.226019  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:11.226035  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:11.305235  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:11.305280  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:09.589831  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.590312  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.089750  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.110191  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:15.112258  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.013609  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:16.014143  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.845792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:13.859185  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:13.859261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:13.896017  188656 cri.go:89] found id: ""
	I0731 21:03:13.896047  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.896055  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:13.896061  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:13.896123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:13.932442  188656 cri.go:89] found id: ""
	I0731 21:03:13.932475  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.932486  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:13.932494  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:13.932564  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:13.971233  188656 cri.go:89] found id: ""
	I0731 21:03:13.971265  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.971274  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:13.971280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:13.971331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:14.009757  188656 cri.go:89] found id: ""
	I0731 21:03:14.009787  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.009796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:14.009805  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:14.009870  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:14.047946  188656 cri.go:89] found id: ""
	I0731 21:03:14.047979  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.047990  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:14.047998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:14.048056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:14.084687  188656 cri.go:89] found id: ""
	I0731 21:03:14.084720  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.084731  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:14.084739  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:14.084805  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:14.124831  188656 cri.go:89] found id: ""
	I0731 21:03:14.124861  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.124870  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:14.124876  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:14.124929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:14.161242  188656 cri.go:89] found id: ""
	I0731 21:03:14.161275  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.161286  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:14.161295  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:14.161308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:14.241060  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:14.241115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:14.282382  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:14.282414  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:14.335201  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:14.335249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:14.351345  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:14.351379  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:14.436524  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:16.937313  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:16.951403  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:16.951490  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:16.991735  188656 cri.go:89] found id: ""
	I0731 21:03:16.991766  188656 logs.go:276] 0 containers: []
	W0731 21:03:16.991777  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:16.991785  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:16.991852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:17.030327  188656 cri.go:89] found id: ""
	I0731 21:03:17.030353  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.030360  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:17.030366  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:17.030419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:17.068161  188656 cri.go:89] found id: ""
	I0731 21:03:17.068195  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.068206  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:17.068214  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:17.068286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:17.105561  188656 cri.go:89] found id: ""
	I0731 21:03:17.105590  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.105601  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:17.105609  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:17.105684  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:17.144503  188656 cri.go:89] found id: ""
	I0731 21:03:17.144529  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.144540  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:17.144547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:17.144610  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:17.183709  188656 cri.go:89] found id: ""
	I0731 21:03:17.183738  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.183747  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:17.183753  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:17.183815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:17.222083  188656 cri.go:89] found id: ""
	I0731 21:03:17.222109  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.222117  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:17.222124  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:17.222178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:17.259503  188656 cri.go:89] found id: ""
	I0731 21:03:17.259534  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.259547  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:17.259561  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:17.259578  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:17.300603  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:17.300642  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:17.352194  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:17.352235  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:17.367179  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:17.367209  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:17.440051  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:17.440074  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:17.440088  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:16.589914  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.082985  188133 pod_ready.go:81] duration metric: took 4m0.000734125s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:18.083015  188133 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:03:18.083039  188133 pod_ready.go:38] duration metric: took 4m12.543404692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:18.083069  188133 kubeadm.go:597] duration metric: took 4m20.473129745s to restartPrimaryControlPlane
	W0731 21:03:18.083176  188133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:18.083210  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:17.610274  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:19.611592  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.514266  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.514967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.027644  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:20.041735  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:20.041826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:20.077436  188656 cri.go:89] found id: ""
	I0731 21:03:20.077470  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.077483  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:20.077491  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:20.077558  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:20.117420  188656 cri.go:89] found id: ""
	I0731 21:03:20.117449  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.117459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:20.117466  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:20.117533  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:20.157794  188656 cri.go:89] found id: ""
	I0731 21:03:20.157827  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.157838  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:20.157847  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:20.157914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:20.193760  188656 cri.go:89] found id: ""
	I0731 21:03:20.193788  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.193796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:20.193803  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:20.193856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:20.231731  188656 cri.go:89] found id: ""
	I0731 21:03:20.231764  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.231777  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:20.231785  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:20.231856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:20.268666  188656 cri.go:89] found id: ""
	I0731 21:03:20.268697  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.268709  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:20.268717  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:20.268786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:20.304355  188656 cri.go:89] found id: ""
	I0731 21:03:20.304392  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.304406  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:20.304414  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:20.304478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:20.343886  188656 cri.go:89] found id: ""
	I0731 21:03:20.343915  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.343927  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:20.343940  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:20.343957  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:20.358460  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:20.358494  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:20.435473  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:20.435499  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:20.435522  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:20.517961  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:20.518002  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:20.561528  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:20.561567  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.119570  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:23.134276  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:23.134366  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:23.172808  188656 cri.go:89] found id: ""
	I0731 21:03:23.172837  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.172846  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:23.172852  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:23.172914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:23.208038  188656 cri.go:89] found id: ""
	I0731 21:03:23.208067  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.208080  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:23.208086  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:23.208140  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:23.244493  188656 cri.go:89] found id: ""
	I0731 21:03:23.244523  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.244533  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:23.244539  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:23.244605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:23.280474  188656 cri.go:89] found id: ""
	I0731 21:03:23.280503  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.280510  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:23.280517  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:23.280581  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:23.317381  188656 cri.go:89] found id: ""
	I0731 21:03:23.317415  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.317428  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:23.317441  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:23.317511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:23.357023  188656 cri.go:89] found id: ""
	I0731 21:03:23.357051  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.357062  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:23.357071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:23.357134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:23.400176  188656 cri.go:89] found id: ""
	I0731 21:03:23.400211  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.400223  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:23.400230  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:23.400298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:23.440157  188656 cri.go:89] found id: ""
	I0731 21:03:23.440190  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.440201  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:23.440213  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:23.440234  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.494762  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:23.494802  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:23.511463  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:23.511510  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:23.600359  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:23.600383  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:23.600403  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:23.682683  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:23.682723  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:22.111495  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:24.112248  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:23.013460  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:25.014605  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:27.014900  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:26.225923  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:26.245708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.245791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.282882  188656 cri.go:89] found id: ""
	I0731 21:03:26.282910  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.282920  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:26.282928  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.282987  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.324227  188656 cri.go:89] found id: ""
	I0731 21:03:26.324268  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.324279  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:26.324287  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.324349  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.365996  188656 cri.go:89] found id: ""
	I0731 21:03:26.366027  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.366038  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:26.366047  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.366119  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.403790  188656 cri.go:89] found id: ""
	I0731 21:03:26.403823  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.403835  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:26.403844  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.403915  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.442924  188656 cri.go:89] found id: ""
	I0731 21:03:26.442947  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.442957  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:26.442964  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.443026  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.482260  188656 cri.go:89] found id: ""
	I0731 21:03:26.482286  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.482294  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:26.482300  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.482364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.526385  188656 cri.go:89] found id: ""
	I0731 21:03:26.526420  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.526432  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.526442  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:26.526511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:26.565217  188656 cri.go:89] found id: ""
	I0731 21:03:26.565250  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.565262  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:26.565275  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:26.565294  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:26.623437  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:26.623478  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:26.639642  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:26.639683  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:26.720274  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:26.720309  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.720325  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:26.799689  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:26.799728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:26.111147  188266 pod_ready.go:81] duration metric: took 4m0.007359775s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:26.111173  188266 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:03:26.111180  188266 pod_ready.go:38] duration metric: took 4m2.82978193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:26.111195  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:03:26.111220  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.111267  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.179210  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:26.179240  188266 cri.go:89] found id: ""
	I0731 21:03:26.179251  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:26.179315  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.184349  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.184430  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.221238  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:26.221267  188266 cri.go:89] found id: ""
	I0731 21:03:26.221277  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:26.221349  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.225908  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.225985  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.276864  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:26.276895  188266 cri.go:89] found id: ""
	I0731 21:03:26.276907  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:26.276974  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.281921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.282003  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.320868  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:26.320903  188266 cri.go:89] found id: ""
	I0731 21:03:26.320914  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:26.320984  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.326203  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.326272  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.378409  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:26.378433  188266 cri.go:89] found id: ""
	I0731 21:03:26.378442  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:26.378504  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.384006  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.384111  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.431113  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:26.431147  188266 cri.go:89] found id: ""
	I0731 21:03:26.431158  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:26.431226  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.437136  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.437213  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.484223  188266 cri.go:89] found id: ""
	I0731 21:03:26.484247  188266 logs.go:276] 0 containers: []
	W0731 21:03:26.484257  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.484263  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:26.484319  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:26.530433  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:26.530470  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.530476  188266 cri.go:89] found id: ""
	I0731 21:03:26.530486  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:26.530551  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.535747  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.541379  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:26.541406  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.586730  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.586769  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:27.133617  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:27.133672  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:27.183805  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:27.183846  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:27.226579  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:27.226620  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:27.290635  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:27.290671  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:27.330700  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:27.330732  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:27.370882  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:27.370918  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:27.426426  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:27.426471  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:27.466359  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:27.466396  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:27.515202  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:27.515235  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:27.569081  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:27.569122  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:27.586776  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:27.586809  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:30.223314  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:30.241046  188266 api_server.go:72] duration metric: took 4m14.179869513s to wait for apiserver process to appear ...
	I0731 21:03:30.241073  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:03:30.241118  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:30.241188  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:30.281267  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:30.281303  188266 cri.go:89] found id: ""
	I0731 21:03:30.281314  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:30.281397  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.285857  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:30.285927  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:30.321742  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:30.321770  188266 cri.go:89] found id: ""
	I0731 21:03:30.321779  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:30.321841  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.326210  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:30.326284  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:30.367998  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:30.368025  188266 cri.go:89] found id: ""
	I0731 21:03:30.368036  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:30.368101  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.372340  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:30.372412  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:30.413689  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:30.413714  188266 cri.go:89] found id: ""
	I0731 21:03:30.413725  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:30.413789  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.418525  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:30.418604  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:30.458505  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.458530  188266 cri.go:89] found id: ""
	I0731 21:03:30.458539  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:30.458587  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.462993  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:30.463058  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:30.500683  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.500711  188266 cri.go:89] found id: ""
	I0731 21:03:30.500722  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:30.500785  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.506197  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:30.506277  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:30.545243  188266 cri.go:89] found id: ""
	I0731 21:03:30.545273  188266 logs.go:276] 0 containers: []
	W0731 21:03:30.545284  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:30.545290  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:30.545371  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:30.588405  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:30.588459  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.588465  188266 cri.go:89] found id: ""
	I0731 21:03:30.588474  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:30.588539  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.593611  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.599345  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:30.599386  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.641530  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:30.641564  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.703655  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:30.703692  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.744119  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:30.744147  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.515238  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:32.014503  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:29.351214  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:29.365487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:29.365561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:29.402989  188656 cri.go:89] found id: ""
	I0731 21:03:29.403015  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.403022  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:29.403028  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:29.403079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:29.443276  188656 cri.go:89] found id: ""
	I0731 21:03:29.443310  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.443321  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:29.443329  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:29.443397  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:29.483285  188656 cri.go:89] found id: ""
	I0731 21:03:29.483311  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.483319  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:29.483326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:29.483384  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:29.522285  188656 cri.go:89] found id: ""
	I0731 21:03:29.522317  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.522329  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:29.522337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:29.522406  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:29.565115  188656 cri.go:89] found id: ""
	I0731 21:03:29.565145  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.565155  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:29.565163  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:29.565233  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:29.603768  188656 cri.go:89] found id: ""
	I0731 21:03:29.603805  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.603816  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:29.603822  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:29.603875  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:29.640380  188656 cri.go:89] found id: ""
	I0731 21:03:29.640406  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.640416  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:29.640424  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:29.640493  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:29.679699  188656 cri.go:89] found id: ""
	I0731 21:03:29.679727  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.679736  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:29.679749  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:29.679764  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:29.735555  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:29.735603  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:29.749670  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:29.749708  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:29.825950  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:29.825973  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:29.825989  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.915420  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:29.915463  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:32.462996  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:32.478659  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:32.478739  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:32.528625  188656 cri.go:89] found id: ""
	I0731 21:03:32.528651  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.528659  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:32.528665  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:32.528724  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:32.574371  188656 cri.go:89] found id: ""
	I0731 21:03:32.574399  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.574408  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:32.574414  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:32.574474  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:32.616916  188656 cri.go:89] found id: ""
	I0731 21:03:32.616960  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.616970  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:32.616975  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:32.617040  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:32.657725  188656 cri.go:89] found id: ""
	I0731 21:03:32.657758  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.657769  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:32.657777  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:32.657842  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:32.693197  188656 cri.go:89] found id: ""
	I0731 21:03:32.693226  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.693237  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:32.693245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:32.693316  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:32.733567  188656 cri.go:89] found id: ""
	I0731 21:03:32.733594  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.733602  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:32.733608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:32.733670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:32.774624  188656 cri.go:89] found id: ""
	I0731 21:03:32.774659  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.774671  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:32.774679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:32.774747  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:32.811755  188656 cri.go:89] found id: ""
	I0731 21:03:32.811790  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.811809  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:32.811822  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:32.811835  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:32.825512  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:32.825544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:32.902310  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:32.902339  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:32.902366  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:32.983347  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:32.983391  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:33.028037  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:33.028068  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:31.165988  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:31.166042  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:31.209564  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:31.209605  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:31.254061  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:31.254105  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:31.269227  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:31.269266  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:31.394442  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:31.394477  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:31.439011  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:31.439047  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:31.476798  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:31.476825  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:31.524460  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:31.524491  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:31.564254  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:31.564288  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:34.122836  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 21:03:34.128516  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 21:03:34.129484  188266 api_server.go:141] control plane version: v1.30.3
	I0731 21:03:34.129513  188266 api_server.go:131] duration metric: took 3.888432526s to wait for apiserver health ...
	I0731 21:03:34.129523  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:03:34.129554  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:34.129622  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:34.167751  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:34.167781  188266 cri.go:89] found id: ""
	I0731 21:03:34.167792  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:34.167860  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.172786  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:34.172858  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:34.212172  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.212204  188266 cri.go:89] found id: ""
	I0731 21:03:34.212215  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:34.212289  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.216651  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:34.216736  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:34.263492  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:34.263515  188266 cri.go:89] found id: ""
	I0731 21:03:34.263528  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:34.263592  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.268548  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:34.268630  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:34.309420  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:34.309453  188266 cri.go:89] found id: ""
	I0731 21:03:34.309463  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:34.309529  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.313921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:34.313993  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:34.354712  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.354740  188266 cri.go:89] found id: ""
	I0731 21:03:34.354754  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:34.354818  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.359363  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:34.359446  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:34.397596  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.397622  188266 cri.go:89] found id: ""
	I0731 21:03:34.397634  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:34.397710  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.402126  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:34.402207  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:34.447198  188266 cri.go:89] found id: ""
	I0731 21:03:34.447234  188266 logs.go:276] 0 containers: []
	W0731 21:03:34.447242  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:34.447248  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:34.447304  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:34.487429  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:34.487452  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.487457  188266 cri.go:89] found id: ""
	I0731 21:03:34.487464  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:34.487519  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.494362  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.499409  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:34.499438  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.549761  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:34.549802  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.588571  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:34.588603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.646590  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:34.646635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.691320  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:34.691353  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:35.098975  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:35.099018  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:35.153924  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:35.153964  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:35.168091  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:35.168121  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:35.214469  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:35.214511  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:35.260694  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:35.260724  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:35.299230  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:35.299261  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:35.413598  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:35.413635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:35.451331  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:35.451359  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:35.582896  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:35.597483  188656 kubeadm.go:597] duration metric: took 4m3.860422558s to restartPrimaryControlPlane
	W0731 21:03:35.597559  188656 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:35.597598  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:36.054326  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:36.070199  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:36.081882  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:36.093300  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:36.093322  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:36.093396  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:36.103781  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:36.103843  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:36.114702  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:36.125213  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:36.125299  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:36.136299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.146441  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:36.146520  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.157524  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:36.168247  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:36.168327  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
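The block above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is removed otherwise so the following `kubeadm init` regenerates it. A minimal stand-alone sketch of that pattern follows; the helper is hypothetical (minikube drives the same grep/rm commands through its ssh_runner rather than locally):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanupStaleConfigs mirrors the sequence in the log: keep a kubeconfig
	// only when it already points at the expected control-plane endpoint.
	func cleanupStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			// grep exits non-zero when the endpoint is absent or the file is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}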
	I0731 21:03:36.178875  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:36.253662  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:03:36.253804  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:36.401385  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:36.401550  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:36.401686  188656 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:36.591601  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:34.513632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.515043  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.593492  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:36.593604  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:36.593690  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:36.593817  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:36.593907  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:36.594011  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:36.594090  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:36.594215  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:36.594602  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:36.595122  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:36.595323  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:36.595414  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:36.595548  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:37.052958  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:37.178980  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:37.375085  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:37.550735  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:37.571991  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:37.575050  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:37.575227  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:37.707194  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:37.997696  188266 system_pods.go:59] 8 kube-system pods found
	I0731 21:03:37.997725  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:37.997730  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:37.997734  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:37.997738  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:37.997741  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:37.997744  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:37.997750  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:37.997754  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:37.997762  188266 system_pods.go:74] duration metric: took 3.868231958s to wait for pod list to return data ...
	I0731 21:03:37.997773  188266 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:03:38.000640  188266 default_sa.go:45] found service account: "default"
	I0731 21:03:38.000665  188266 default_sa.go:55] duration metric: took 2.88647ms for default service account to be created ...
	I0731 21:03:38.000672  188266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:03:38.007107  188266 system_pods.go:86] 8 kube-system pods found
	I0731 21:03:38.007132  188266 system_pods.go:89] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:38.007137  188266 system_pods.go:89] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:38.007142  188266 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:38.007146  188266 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:38.007152  188266 system_pods.go:89] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:38.007158  188266 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:38.007164  188266 system_pods.go:89] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:38.007168  188266 system_pods.go:89] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:38.007175  188266 system_pods.go:126] duration metric: took 6.498733ms to wait for k8s-apps to be running ...
	I0731 21:03:38.007183  188266 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:03:38.007240  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:38.026906  188266 system_svc.go:56] duration metric: took 19.708653ms WaitForService to wait for kubelet
	I0731 21:03:38.026938  188266 kubeadm.go:582] duration metric: took 4m21.965767608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:03:38.026969  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:03:38.030479  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:03:38.030554  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 21:03:38.030577  188266 node_conditions.go:105] duration metric: took 3.601933ms to run NodePressure ...
	I0731 21:03:38.030600  188266 start.go:241] waiting for startup goroutines ...
	I0731 21:03:38.030611  188266 start.go:246] waiting for cluster config update ...
	I0731 21:03:38.030626  188266 start.go:255] writing updated cluster config ...
	I0731 21:03:38.031028  188266 ssh_runner.go:195] Run: rm -f paused
	I0731 21:03:38.082629  188266 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:03:38.084590  188266 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-125614" cluster and "default" namespace by default
	I0731 21:03:37.709295  188656 out.go:204]   - Booting up control plane ...
	I0731 21:03:37.709427  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:37.722549  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:37.723455  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:37.724194  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:37.726323  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:03:39.013773  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:41.016158  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:44.360883  188133 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.27764632s)
	I0731 21:03:44.360955  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:44.379069  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:44.389518  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:44.400223  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:44.400250  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:44.400302  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:44.410644  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:44.410718  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:44.421136  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:44.431161  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:44.431231  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:44.441936  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.451761  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:44.451820  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.462692  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:44.472982  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:44.473050  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:03:44.482980  188133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:44.532539  188133 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0731 21:03:44.532637  188133 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:44.651505  188133 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:44.651654  188133 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:44.651772  188133 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:44.660564  188133 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:44.662559  188133 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:44.662676  188133 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:44.662765  188133 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:44.662878  188133 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:44.662971  188133 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:44.663073  188133 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:44.663142  188133 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:44.663218  188133 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:44.663293  188133 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:44.663389  188133 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:44.663527  188133 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:44.663587  188133 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:44.663679  188133 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:44.813556  188133 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:44.908380  188133 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:03:45.005215  188133 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:45.138446  188133 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:45.222892  188133 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:45.223622  188133 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:45.226748  188133 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:43.513039  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.513901  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.228799  188133 out.go:204]   - Booting up control plane ...
	I0731 21:03:45.228934  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:45.229087  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:45.230021  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:45.249145  188133 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:45.258184  188133 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:45.258267  188133 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:45.392726  188133 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:03:45.392852  188133 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:03:45.899754  188133 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.694095ms
	I0731 21:03:45.899857  188133 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:03:51.901713  188133 kubeadm.go:310] [api-check] The API server is healthy after 6.00194457s
	I0731 21:03:51.914947  188133 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:03:51.932510  188133 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:03:51.971055  188133 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:03:51.971273  188133 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-916885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:03:51.985104  188133 kubeadm.go:310] [bootstrap-token] Using token: q86dx8.9ipyjyidvcwogxce
	I0731 21:03:47.515248  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:50.016206  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:51.986447  188133 out.go:204]   - Configuring RBAC rules ...
	I0731 21:03:51.986576  188133 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:03:51.993910  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:03:52.002474  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:03:52.007035  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:03:52.011708  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:03:52.020500  188133 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:03:52.310057  188133 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:03:52.778266  188133 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:03:53.308425  188133 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:03:53.309509  188133 kubeadm.go:310] 
	I0731 21:03:53.309585  188133 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:03:53.309597  188133 kubeadm.go:310] 
	I0731 21:03:53.309686  188133 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:03:53.309694  188133 kubeadm.go:310] 
	I0731 21:03:53.309715  188133 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:03:53.309771  188133 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:03:53.309875  188133 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:03:53.309894  188133 kubeadm.go:310] 
	I0731 21:03:53.310007  188133 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:03:53.310027  188133 kubeadm.go:310] 
	I0731 21:03:53.310088  188133 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:03:53.310099  188133 kubeadm.go:310] 
	I0731 21:03:53.310164  188133 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:03:53.310275  188133 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:03:53.310371  188133 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:03:53.310396  188133 kubeadm.go:310] 
	I0731 21:03:53.310499  188133 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:03:53.310601  188133 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:03:53.310611  188133 kubeadm.go:310] 
	I0731 21:03:53.310735  188133 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.310910  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 21:03:53.310961  188133 kubeadm.go:310] 	--control-plane 
	I0731 21:03:53.310970  188133 kubeadm.go:310] 
	I0731 21:03:53.311078  188133 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:03:53.311092  188133 kubeadm.go:310] 
	I0731 21:03:53.311222  188133 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.311402  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 21:03:53.312409  188133 kubeadm.go:310] W0731 21:03:44.497219    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312703  188133 kubeadm.go:310] W0731 21:03:44.498106    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312811  188133 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:03:53.312857  188133 cni.go:84] Creating CNI manager for ""
	I0731 21:03:53.312870  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:03:53.315035  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:03:53.316406  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:03:53.327870  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
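For context, the 1-k8s.conflist copied above is the CNI plugin chain for the bridge network minikube configures alongside the crio runtime. The exact 496-byte payload is not shown in the log; the values below are assumptions illustrating a typical bridge + portmap chain, written locally in place of minikube's scp-over-ssh step:

	package main

	import "os"

	// Illustrative only: field values are assumptions, not minikube's generated file.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Stand-in for the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}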
	I0731 21:03:53.352757  188133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:03:53.352902  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:53.352919  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-916885 minikube.k8s.io/updated_at=2024_07_31T21_03_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=no-preload-916885 minikube.k8s.io/primary=true
	I0731 21:03:53.403275  188133 ops.go:34] apiserver oom_adj: -16
	I0731 21:03:53.579520  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.080457  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.579898  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.080464  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.580211  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.080518  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.579806  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.080302  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.181987  188133 kubeadm.go:1113] duration metric: took 3.829153755s to wait for elevateKubeSystemPrivileges
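The repeated `kubectl get sa default` calls above are minikube waiting for the default service account to appear after granting kube-system cluster-admin (the "elevateKubeSystemPrivileges" metric). A rough sketch of that loop, reusing the binary path and kubeconfig from the log; the stand-alone program itself is hypothetical:

	package main

	import (
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

		// Grant cluster-admin to kube-system's default service account.
		_ = exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig).Run()

		// Poll until the API server has created the default service account.
		for i := 0; i < 60; i++ {
			if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
				return // service account exists; privileges are in place
			}
			time.Sleep(500 * time.Millisecond)
		}
	}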
	I0731 21:03:57.182024  188133 kubeadm.go:394] duration metric: took 4m59.623631766s to StartCluster
	I0731 21:03:57.182051  188133 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.182160  188133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:03:57.185297  188133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.185586  188133 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:03:57.185672  188133 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:03:57.185753  188133 addons.go:69] Setting storage-provisioner=true in profile "no-preload-916885"
	I0731 21:03:57.185776  188133 addons.go:69] Setting default-storageclass=true in profile "no-preload-916885"
	I0731 21:03:57.185797  188133 addons.go:69] Setting metrics-server=true in profile "no-preload-916885"
	I0731 21:03:57.185825  188133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-916885"
	I0731 21:03:57.185844  188133 addons.go:234] Setting addon metrics-server=true in "no-preload-916885"
	W0731 21:03:57.185856  188133 addons.go:243] addon metrics-server should already be in state true
	I0731 21:03:57.185864  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:03:57.185889  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.185785  188133 addons.go:234] Setting addon storage-provisioner=true in "no-preload-916885"
	W0731 21:03:57.185926  188133 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:03:57.185956  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.186201  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186226  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186247  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186279  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186301  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186345  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.187280  188133 out.go:177] * Verifying Kubernetes components...
	I0731 21:03:57.188864  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:03:57.202393  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0731 21:03:57.202431  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0731 21:03:57.202856  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.202946  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.203416  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203434  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203688  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203707  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203829  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204081  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204270  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.204428  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.204462  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.204960  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0731 21:03:57.205722  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.206275  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.206291  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.208245  188133 addons.go:234] Setting addon default-storageclass=true in "no-preload-916885"
	W0731 21:03:57.208264  188133 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:03:57.208296  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.208640  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.208663  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.208866  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.209432  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.209458  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.222235  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0731 21:03:57.222835  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.223408  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.223429  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.224137  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.224366  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.226564  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.227398  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0731 21:03:57.227842  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.228377  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.228399  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.228427  188133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:03:57.228836  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.229521  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.229573  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.230036  188133 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.230056  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:03:57.230075  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.230207  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0731 21:03:57.230601  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.230993  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.231008  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.231323  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.231519  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.233542  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.235239  188133 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:03:52.514632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:55.014017  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:57.235631  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236081  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.236105  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.236478  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:03:57.236493  188133 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:03:57.236510  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.236545  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.236711  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.236824  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.238988  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239335  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.239361  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.239645  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.239775  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.239902  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.252386  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0731 21:03:57.252846  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.253454  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.253474  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.253837  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.254048  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.255784  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.256020  188133 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.256037  188133 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:03:57.256057  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.258870  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259220  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.259254  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259446  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.259612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.259783  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.259940  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.405243  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:03:57.426852  188133 node_ready.go:35] waiting up to 6m0s for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494325  188133 node_ready.go:49] node "no-preload-916885" has status "Ready":"True"
	I0731 21:03:57.494352  188133 node_ready.go:38] duration metric: took 67.471516ms for node "no-preload-916885" to be "Ready" ...
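The node_ready wait above checks the node's Ready condition through the API server. A minimal client-go sketch of that kind of check, assuming the kubeconfig path and node name from this run (minikube's actual implementation in node_ready.go differs in detail):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-916885", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		panic("timed out waiting for node to be Ready")
	}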
	I0731 21:03:57.494365  188133 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:57.497819  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:03:57.497849  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:03:57.528118  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:03:57.528148  188133 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:03:57.557889  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.568872  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.583099  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:03:57.587315  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:57.587342  188133 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:03:57.645504  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:58.515635  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515650  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515667  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.515675  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516054  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516100  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516117  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516161  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516187  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516141  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516213  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516097  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516431  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516444  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.517889  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.517914  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.517930  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.569097  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.569120  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.569463  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.569511  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.569520  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726076  188133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.080526254s)
	I0731 21:03:58.726140  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726153  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.726469  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.726490  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726501  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726514  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.728603  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.728666  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.728688  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.728715  188133 addons.go:475] Verifying addon metrics-server=true in "no-preload-916885"
	I0731 21:03:58.730520  188133 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:03:58.731823  188133 addons.go:510] duration metric: took 1.546157188s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:03:57.515366  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.515730  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:02.013803  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.593082  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:00.589165  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:00.589192  188133 pod_ready.go:81] duration metric: took 3.00606369s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:00.589204  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:02.597316  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.096168  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.597832  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.597857  188133 pod_ready.go:81] duration metric: took 5.008646335s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.597866  188133 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603105  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.603128  188133 pod_ready.go:81] duration metric: took 5.254251ms for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603140  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610748  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.610771  188133 pod_ready.go:81] duration metric: took 7.623438ms for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610782  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615949  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.615966  188133 pod_ready.go:81] duration metric: took 5.176213ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615975  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620431  188133 pod_ready.go:92] pod "kube-proxy-b4h2z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.620450  188133 pod_ready.go:81] duration metric: took 4.469258ms for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620458  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993080  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.993104  188133 pod_ready.go:81] duration metric: took 372.640001ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993112  188133 pod_ready.go:38] duration metric: took 8.498733061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:05.993125  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:05.993186  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:06.009952  188133 api_server.go:72] duration metric: took 8.824325154s to wait for apiserver process to appear ...
	I0731 21:04:06.009981  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:06.010001  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 21:04:06.014715  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 21:04:06.015917  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:04:06.015944  188133 api_server.go:131] duration metric: took 5.952931ms to wait for apiserver health ...
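The healthz check logged just above is a plain GET against the apiserver's /healthz endpoint, with HTTP 200 and body "ok" treated as healthy. A minimal sketch of such a probe; for brevity it skips TLS verification instead of loading the cluster CA, and real clusters may additionally require client credentials:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Illustration only: a production probe should trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.72.239:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
	}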
	I0731 21:04:06.015954  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:06.196874  188133 system_pods.go:59] 9 kube-system pods found
	I0731 21:04:06.196907  188133 system_pods.go:61] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.196914  188133 system_pods.go:61] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.196918  188133 system_pods.go:61] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.196923  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.196929  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.196933  188133 system_pods.go:61] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.196938  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.196945  188133 system_pods.go:61] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.196950  188133 system_pods.go:61] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.196960  188133 system_pods.go:74] duration metric: took 180.999269ms to wait for pod list to return data ...
	I0731 21:04:06.196970  188133 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:06.394499  188133 default_sa.go:45] found service account: "default"
	I0731 21:04:06.394530  188133 default_sa.go:55] duration metric: took 197.552628ms for default service account to be created ...
	I0731 21:04:06.394539  188133 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:06.598314  188133 system_pods.go:86] 9 kube-system pods found
	I0731 21:04:06.598345  188133 system_pods.go:89] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.598354  188133 system_pods.go:89] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.598361  188133 system_pods.go:89] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.598370  188133 system_pods.go:89] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.598376  188133 system_pods.go:89] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.598389  188133 system_pods.go:89] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.598397  188133 system_pods.go:89] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.598408  188133 system_pods.go:89] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.598419  188133 system_pods.go:89] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.598430  188133 system_pods.go:126] duration metric: took 203.884264ms to wait for k8s-apps to be running ...
	I0731 21:04:06.598442  188133 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:06.598498  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:06.613642  188133 system_svc.go:56] duration metric: took 15.190132ms WaitForService to wait for kubelet
	I0731 21:04:06.613675  188133 kubeadm.go:582] duration metric: took 9.4280531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:06.613705  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:06.794163  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:06.794191  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:06.794204  188133 node_conditions.go:105] duration metric: took 180.492992ms to run NodePressure ...
	I0731 21:04:06.794218  188133 start.go:241] waiting for startup goroutines ...
	I0731 21:04:06.794227  188133 start.go:246] waiting for cluster config update ...
	I0731 21:04:06.794239  188133 start.go:255] writing updated cluster config ...
	I0731 21:04:06.794547  188133 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:06.844118  188133 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:04:06.846234  188133 out.go:177] * Done! kubectl is now configured to use "no-preload-916885" cluster and "default" namespace by default
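The block above shows the final readiness waits for the "no-preload-916885" cluster: apiserver health, kube-system pods, the default service account, 'sudo systemctl is-active --quiet service kubelet', and node conditions. Below is a minimal standalone Go sketch of the last two of those checks; it assumes systemctl and kubectl are available locally and is only an illustration of the pattern, not minikube's ssh_runner-based implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForKubelet polls 'systemctl is-active --quiet kubelet' until it exits 0
// or the timeout elapses, mirroring the WaitForService step in the log above.
func waitForKubelet(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("kubelet did not become active within %s", timeout)
}

func main() {
	if err := waitForKubelet(90 * time.Second); err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	// Once kubelet is active, list the kube-system pods the harness reports
	// above (Running vs. Pending for metrics-server and friends).
	out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system", "--no-headers").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Print(string(out))
}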
	I0731 21:04:04.015079  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:06.514907  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:08.514958  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:11.014341  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:13.514956  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:14.014985  187862 pod_ready.go:81] duration metric: took 4m0.007784922s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:04:14.015013  187862 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:04:14.015020  187862 pod_ready.go:38] duration metric: took 4m6.056814749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
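The lines above are the 4m0s readiness wait for "metrics-server-569cc877fc-slbkm" expiring with a context deadline. A hedged sketch of reproducing that wait by hand against the same cluster, using the pod name and context taken from this log (pod names are ephemeral, so this is illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the metrics-server pod reports Ready, or give up after the
	// same four minutes the test allowed. Context and pod name are copied
	// from the log above and will differ on another run.
	cmd := exec.Command("kubectl", "--context", "embed-certs-831240", "wait",
		"--namespace", "kube-system",
		"--for=condition=ready", "pod/metrics-server-569cc877fc-slbkm",
		"--timeout=4m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}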
	I0731 21:04:14.015034  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:14.015079  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:14.015127  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:14.086254  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:14.086283  187862 cri.go:89] found id: ""
	I0731 21:04:14.086293  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:14.086368  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.091267  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:14.091334  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:14.138577  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.138613  187862 cri.go:89] found id: ""
	I0731 21:04:14.138624  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:14.138696  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.143245  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:14.143315  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:14.182295  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.182325  187862 cri.go:89] found id: ""
	I0731 21:04:14.182336  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:14.182400  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.186861  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:14.186936  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:14.230524  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:14.230547  187862 cri.go:89] found id: ""
	I0731 21:04:14.230555  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:14.230609  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.235285  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:14.235354  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:14.279188  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.279209  187862 cri.go:89] found id: ""
	I0731 21:04:14.279217  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:14.279268  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.284280  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:14.284362  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:14.333736  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:14.333764  187862 cri.go:89] found id: ""
	I0731 21:04:14.333774  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:14.333830  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.338652  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:14.338717  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:14.380632  187862 cri.go:89] found id: ""
	I0731 21:04:14.380663  187862 logs.go:276] 0 containers: []
	W0731 21:04:14.380672  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:14.380678  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:14.380747  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:14.424705  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.424727  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.424732  187862 cri.go:89] found id: ""
	I0731 21:04:14.424741  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:14.424801  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.429310  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.434243  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:14.434267  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:14.490743  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:14.490782  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.536575  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:14.536613  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.585952  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:14.585986  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.626198  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:14.626228  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:14.672674  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:14.672712  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.711759  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:14.711788  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.757020  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:14.757047  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:15.286344  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:15.286393  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:15.301933  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:15.301969  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:15.451532  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:15.451566  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:15.502398  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:15.502443  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:15.544678  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:15.544719  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
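The gathering pass above follows one pattern per component: discover container IDs with 'sudo crictl ps -a --quiet --name=<component>', then tail each container with 'crictl logs --tail 400 <id>'. A minimal standalone Go sketch of that loop, assuming sudo and crictl are available on the node (an illustration of the pattern, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components queried in the log-gathering pass above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, name := range components {
		// Discover container IDs for this component (there may be zero or several).
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl ps failed for %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, matching the '--tail 400' used above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}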
	I0731 21:04:17.729291  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:04:17.730290  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:17.730512  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:18.104050  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:18.121028  187862 api_server.go:72] duration metric: took 4m17.382743031s to wait for apiserver process to appear ...
	I0731 21:04:18.121061  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:18.121109  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:18.121179  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:18.165472  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.165498  187862 cri.go:89] found id: ""
	I0731 21:04:18.165507  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:18.165559  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.169592  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:18.169663  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:18.216918  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.216942  187862 cri.go:89] found id: ""
	I0731 21:04:18.216951  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:18.217015  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.221467  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:18.221546  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:18.267066  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.267089  187862 cri.go:89] found id: ""
	I0731 21:04:18.267098  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:18.267164  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.271583  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:18.271662  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:18.316381  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.316404  187862 cri.go:89] found id: ""
	I0731 21:04:18.316412  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:18.316472  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.320859  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:18.320932  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:18.365366  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:18.365396  187862 cri.go:89] found id: ""
	I0731 21:04:18.365410  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:18.365476  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.369933  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:18.370019  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:18.411121  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:18.411143  187862 cri.go:89] found id: ""
	I0731 21:04:18.411152  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:18.411203  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.415493  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:18.415561  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:18.453040  187862 cri.go:89] found id: ""
	I0731 21:04:18.453069  187862 logs.go:276] 0 containers: []
	W0731 21:04:18.453078  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:18.453085  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:18.453153  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:18.499335  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:18.499359  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.499363  187862 cri.go:89] found id: ""
	I0731 21:04:18.499371  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:18.499446  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.504353  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.508619  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:18.508640  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:18.562692  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:18.562732  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.623405  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:18.623446  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.679472  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:18.679510  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.728893  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:18.728933  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.770963  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:18.770994  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:18.819353  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:18.819385  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:18.835654  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:18.835684  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:18.947479  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:18.947516  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.995005  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:18.995043  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:19.033246  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:19.033274  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:19.092703  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:19.092740  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:19.129738  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:19.129769  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:22.058935  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 21:04:22.063496  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 21:04:22.064670  187862 api_server.go:141] control plane version: v1.30.3
	I0731 21:04:22.064690  187862 api_server.go:131] duration metric: took 3.943623055s to wait for apiserver health ...
	I0731 21:04:22.064699  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:22.064721  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:22.064771  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:22.103710  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.103733  187862 cri.go:89] found id: ""
	I0731 21:04:22.103741  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:22.103798  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.108133  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:22.108203  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:22.159120  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.159145  187862 cri.go:89] found id: ""
	I0731 21:04:22.159155  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:22.159213  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.165107  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:22.165169  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:22.202426  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.202454  187862 cri.go:89] found id: ""
	I0731 21:04:22.202464  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:22.202524  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.206785  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:22.206842  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:22.245008  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.245039  187862 cri.go:89] found id: ""
	I0731 21:04:22.245050  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:22.245111  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.249467  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:22.249548  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:22.731353  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:22.731627  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:22.298105  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.298135  187862 cri.go:89] found id: ""
	I0731 21:04:22.298145  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:22.298209  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.302845  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:22.302902  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:22.346868  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.346898  187862 cri.go:89] found id: ""
	I0731 21:04:22.346909  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:22.346978  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.351246  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:22.351313  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:22.389698  187862 cri.go:89] found id: ""
	I0731 21:04:22.389730  187862 logs.go:276] 0 containers: []
	W0731 21:04:22.389742  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:22.389751  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:22.389817  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:22.425212  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.425234  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.425238  187862 cri.go:89] found id: ""
	I0731 21:04:22.425245  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:22.425298  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.429584  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.433471  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:22.433496  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.490354  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:22.490390  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.530117  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:22.530146  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:22.545249  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:22.545281  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:22.658074  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:22.658115  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.711537  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:22.711573  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.758644  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:22.758685  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.796716  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:22.796751  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.843502  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:22.843538  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.881738  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:22.881765  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:22.936317  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:22.936360  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.977562  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:22.977592  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:23.354873  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:23.354921  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:25.917553  187862 system_pods.go:59] 8 kube-system pods found
	I0731 21:04:25.917588  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.917593  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.917597  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.917601  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.917604  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.917608  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.917614  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.917624  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.917635  187862 system_pods.go:74] duration metric: took 3.852929636s to wait for pod list to return data ...
	I0731 21:04:25.917649  187862 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:25.920234  187862 default_sa.go:45] found service account: "default"
	I0731 21:04:25.920256  187862 default_sa.go:55] duration metric: took 2.600194ms for default service account to be created ...
	I0731 21:04:25.920264  187862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:25.926296  187862 system_pods.go:86] 8 kube-system pods found
	I0731 21:04:25.926325  187862 system_pods.go:89] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.926330  187862 system_pods.go:89] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.926334  187862 system_pods.go:89] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.926338  187862 system_pods.go:89] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.926342  187862 system_pods.go:89] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.926346  187862 system_pods.go:89] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.926352  187862 system_pods.go:89] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.926356  187862 system_pods.go:89] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.926365  187862 system_pods.go:126] duration metric: took 6.094538ms to wait for k8s-apps to be running ...
	I0731 21:04:25.926373  187862 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:25.926433  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:25.945225  187862 system_svc.go:56] duration metric: took 18.837835ms WaitForService to wait for kubelet
	I0731 21:04:25.945264  187862 kubeadm.go:582] duration metric: took 4m25.206984451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:25.945294  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:25.948480  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:25.948506  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:25.948520  187862 node_conditions.go:105] duration metric: took 3.219175ms to run NodePressure ...
	I0731 21:04:25.948535  187862 start.go:241] waiting for startup goroutines ...
	I0731 21:04:25.948543  187862 start.go:246] waiting for cluster config update ...
	I0731 21:04:25.948556  187862 start.go:255] writing updated cluster config ...
	I0731 21:04:25.949317  187862 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:26.000525  187862 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:04:26.002719  187862 out.go:177] * Done! kubectl is now configured to use "embed-certs-831240" cluster and "default" namespace by default
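The healthz check logged earlier for this cluster ("Checking apiserver healthz at https://192.168.39.92:8443/healthz ... returned 200: ok") can be reproduced with a small poller. The sketch below skips TLS verification for brevity, whereas the real check trusts the cluster CA; the address is the one from this run and must be substituted for another cluster.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz issues GET <url> until it returns 200 or the timeout passes.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.92:8443/healthz", 90*time.Second); err != nil {
		fmt.Println(err)
	}
}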
	I0731 21:04:32.732572  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:32.732835  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:52.734257  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:52.734530  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739465  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:05:32.739778  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739796  188656 kubeadm.go:310] 
	I0731 21:05:32.739854  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:05:32.739962  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:05:32.739988  188656 kubeadm.go:310] 
	I0731 21:05:32.740034  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:05:32.740083  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:05:32.740230  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:05:32.740245  188656 kubeadm.go:310] 
	I0731 21:05:32.740393  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:05:32.740441  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:05:32.740485  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:05:32.740494  188656 kubeadm.go:310] 
	I0731 21:05:32.740624  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:05:32.740741  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:05:32.740752  188656 kubeadm.go:310] 
	I0731 21:05:32.740888  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:05:32.741008  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:05:32.741084  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:05:32.741145  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:05:32.741152  188656 kubeadm.go:310] 
	I0731 21:05:32.741834  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:05:32.741967  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:05:32.742066  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:05:32.742264  188656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
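The repeated [kubelet-check] failure above is an HTTP GET against the kubelet's healthz endpoint on localhost:10248 being refused. A minimal Go sketch of that same probe, useful for checking by hand whether the kubelet ever comes up (an illustration of the check kubeadm describes, not kubeadm's code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkKubelet performs the probe from the log above: GET http://localhost:10248/healthz.
// "connect: connection refused" means the kubelet process is not listening.
func checkKubelet() error {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:10248: connect: connection refused
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	for i := 0; i < 5; i++ {
		if err := checkKubelet(); err != nil {
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(10 * time.Second)
			continue
		}
		return
	}
}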
	
	I0731 21:05:32.742340  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:05:33.227380  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:05:33.243864  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:05:33.254208  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:05:33.254234  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:05:33.254313  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:05:33.264766  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:05:33.264846  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:05:33.275517  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:05:33.286281  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:05:33.286358  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:05:33.297108  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.307555  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:05:33.307627  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.318193  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:05:33.328155  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:05:33.328220  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
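The sequence above checks each kubeconfig under /etc/kubernetes for the control-plane endpoint with grep and removes the file when the endpoint is absent or the file is missing, before retrying 'kubeadm init'. A minimal Go sketch of that stale-config cleanup, assuming sudo, grep, and rm on the node (illustrative; minikube runs these over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// Kubeconfig files inspected by the cleanup pass above.
var configs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, cfg := range configs {
		// grep exits non-zero when the file is missing or the endpoint is absent;
		// either way the file is treated as stale and removed.
		if err := exec.Command("sudo", "grep", endpoint, cfg).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, cfg)
			_ = exec.Command("sudo", "rm", "-f", cfg).Run()
		}
	}
}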
	I0731 21:05:33.338088  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:05:33.569897  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:07:29.725230  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:07:29.725381  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:07:29.726868  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:07:29.726959  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:07:29.727064  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:07:29.727204  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:07:29.727322  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:07:29.727389  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:07:29.729525  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:07:29.729659  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:07:29.729761  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:07:29.729918  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:07:29.730026  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:07:29.730126  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:07:29.730268  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:07:29.730369  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:07:29.730461  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:07:29.730555  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:07:29.730658  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:07:29.730713  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:07:29.730790  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:07:29.730856  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:07:29.730931  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:07:29.731014  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:07:29.731111  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:07:29.731248  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:07:29.731339  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:07:29.731395  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:07:29.731486  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:07:29.733052  188656 out.go:204]   - Booting up control plane ...
	I0731 21:07:29.733146  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:07:29.733226  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:07:29.733305  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:07:29.733454  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:07:29.733656  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:07:29.733735  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:07:29.733830  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734048  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734116  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734275  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734331  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734543  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734642  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734868  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734966  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.735234  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.735252  188656 kubeadm.go:310] 
	I0731 21:07:29.735313  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:07:29.735376  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:07:29.735385  188656 kubeadm.go:310] 
	I0731 21:07:29.735432  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:07:29.735480  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:07:29.735624  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:07:29.735634  188656 kubeadm.go:310] 
	I0731 21:07:29.735779  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:07:29.735830  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:07:29.735879  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:07:29.735889  188656 kubeadm.go:310] 
	I0731 21:07:29.736038  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:07:29.736129  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:07:29.736141  188656 kubeadm.go:310] 
	I0731 21:07:29.736241  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:07:29.736315  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:07:29.736400  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:07:29.736480  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:07:29.736537  188656 kubeadm.go:310] 
	I0731 21:07:29.736579  188656 kubeadm.go:394] duration metric: took 7m58.053099483s to StartCluster
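At this point kubeadm has given up and its output suggests listing kube containers on the CRI-O socket ('crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause') and then inspecting the failing one's logs. A hedged Go sketch of that filter, using the socket path from the advice above (newer crictl versions prefer the unix:// form of the endpoint):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const endpoint = "/var/run/crio/crio.sock"

func main() {
	// List all containers, keep kube components, drop pause sandboxes -
	// the same filter as 'crictl ps -a | grep kube | grep -v pause'.
	out, err := exec.Command("sudo", "crictl", "--runtime-endpoint", endpoint, "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl ps failed:", err, string(out))
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line)
			// The first column is the container ID; pass it to
			// 'crictl --runtime-endpoint /var/run/crio/crio.sock logs <ID>'
			// to inspect the crashed component.
		}
	}
}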
	I0731 21:07:29.736660  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:07:29.736793  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:07:29.802897  188656 cri.go:89] found id: ""
	I0731 21:07:29.802932  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.802945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:07:29.802953  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:07:29.803021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:07:29.840059  188656 cri.go:89] found id: ""
	I0731 21:07:29.840088  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.840098  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:07:29.840106  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:07:29.840178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:07:29.881030  188656 cri.go:89] found id: ""
	I0731 21:07:29.881058  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.881066  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:07:29.881073  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:07:29.881150  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:07:29.923495  188656 cri.go:89] found id: ""
	I0731 21:07:29.923524  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.923532  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:07:29.923538  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:07:29.923604  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:07:29.966128  188656 cri.go:89] found id: ""
	I0731 21:07:29.966156  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.966164  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:07:29.966171  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:07:29.966236  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:07:30.007648  188656 cri.go:89] found id: ""
	I0731 21:07:30.007678  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.007687  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:07:30.007693  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:07:30.007748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:07:30.047857  188656 cri.go:89] found id: ""
	I0731 21:07:30.047887  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.047903  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:07:30.047909  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:07:30.047959  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:07:30.087245  188656 cri.go:89] found id: ""
	I0731 21:07:30.087275  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.087283  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:07:30.087294  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:07:30.087308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:07:30.168205  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:07:30.168235  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:07:30.168256  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:07:30.276908  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:07:30.276951  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:07:30.322993  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:07:30.323030  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:07:30.375237  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:07:30.375287  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 21:07:30.392523  188656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:07:30.392579  188656 out.go:239] * 
	W0731 21:07:30.392653  188656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.392683  188656 out.go:239] * 
	W0731 21:07:30.393845  188656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:07:30.397498  188656 out.go:177] 
	W0731 21:07:30.398890  188656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.398959  188656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:07:30.398995  188656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:07:30.401295  188656 out.go:177] 
	
	
	==> CRI-O <==
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.860540959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460595860511706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b54a52f0-6ee9-47c5-8db9-a6e952a504ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.861391371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d82f0de3-97ca-4a34-be5f-f2b46dd6e14a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.861468028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d82f0de3-97ca-4a34-be5f-f2b46dd6e14a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.861506879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d82f0de3-97ca-4a34-be5f-f2b46dd6e14a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.895349544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76289e94-9573-4137-a571-86207bd83fd9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.895436011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76289e94-9573-4137-a571-86207bd83fd9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.897059674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e3498e0-9838-4cc2-bb48-37d37a550043 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.897573674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460595897547694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e3498e0-9838-4cc2-bb48-37d37a550043 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.898334607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce5b0b8d-d726-4c9d-98d3-c4a9a85d2e2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.898437310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce5b0b8d-d726-4c9d-98d3-c4a9a85d2e2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.898486068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ce5b0b8d-d726-4c9d-98d3-c4a9a85d2e2b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.932194827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d8e8582-2d38-47e1-8ffc-8108a1d0f004 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.932321309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d8e8582-2d38-47e1-8ffc-8108a1d0f004 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.933444974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23478af8-ad92-4db4-8148-9f053e3e0247 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.933849844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460595933811816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23478af8-ad92-4db4-8148-9f053e3e0247 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.934599623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d45917c8-5b6f-44a3-ac66-4dc53bd8752b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.934651399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d45917c8-5b6f-44a3-ac66-4dc53bd8752b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.934683900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d45917c8-5b6f-44a3-ac66-4dc53bd8752b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.970339092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d58089f5-5db1-4751-8618-0e65b5df9d34 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.970414878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d58089f5-5db1-4751-8618-0e65b5df9d34 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.971803136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94578c45-f11e-45ea-9722-2278c3f6550d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.972190616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460595972163539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94578c45-f11e-45ea-9722-2278c3f6550d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.972803495Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6837e950-2a4d-4b0b-a7f1-9c55154b1453 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.972856908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6837e950-2a4d-4b0b-a7f1-9c55154b1453 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:16:35 old-k8s-version-239115 crio[646]: time="2024-07-31 21:16:35.972889327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6837e950-2a4d-4b0b-a7f1-9c55154b1453 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 20:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062231] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050403] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.190389] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.608719] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.611027] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.653908] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.062587] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060554] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.234631] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.143128] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.268421] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.725014] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.065215] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.078703] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +10.116461] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 21:03] systemd-fstab-generator[5008]: Ignoring "noauto" option for root device
	[Jul31 21:05] systemd-fstab-generator[5292]: Ignoring "noauto" option for root device
	[  +0.069669] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:16:36 up 17 min,  0 users,  load average: 0.00, 0.04, 0.06
	Linux old-k8s-version-239115 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00055b6f0)
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000979ef0, 0x4f0ac20, 0xc000b9e4b0, 0x1, 0xc00009e0c0)
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001f7180, 0xc00009e0c0)
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ba03b0, 0xc000987900)
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 31 21:16:30 old-k8s-version-239115 kubelet[6469]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 31 21:16:30 old-k8s-version-239115 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 21:16:30 old-k8s-version-239115 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 21:16:31 old-k8s-version-239115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 31 21:16:31 old-k8s-version-239115 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 21:16:31 old-k8s-version-239115 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 21:16:31 old-k8s-version-239115 kubelet[6478]: I0731 21:16:31.212710    6478 server.go:416] Version: v1.20.0
	Jul 31 21:16:31 old-k8s-version-239115 kubelet[6478]: I0731 21:16:31.212974    6478 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 21:16:31 old-k8s-version-239115 kubelet[6478]: I0731 21:16:31.214966    6478 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 21:16:31 old-k8s-version-239115 kubelet[6478]: W0731 21:16:31.215878    6478 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 31 21:16:31 old-k8s-version-239115 kubelet[6478]: I0731 21:16:31.216060    6478 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 2 (230.058986ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-239115" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.49s)
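The failure above follows the kubelet-not-running pattern already diagnosed in the captured logs: kubelet v1.20.0 crash-loops on this node (restart counter 114, "Cannot detect current cgroup on cgroup v2"), and minikube itself suggests retrying with --extra-config=kubelet.cgroup-driver=systemd (K8S_KUBELET_NOT_RUNNING suggestion, issue #4172). The sketch below is illustrative only, not part of the test run: it assumes the old-k8s-version-239115 profile still exists (the Audit table shows it was later deleted) and reuses flag values taken from the logs above.

  # Inspect kubelet health on the node, using the commands kubeadm suggests in the log above
  minikube -p old-k8s-version-239115 ssh "sudo systemctl status kubelet"
  minikube -p old-k8s-version-239115 ssh "sudo journalctl -xeu kubelet | tail -n 100"
  minikube -p old-k8s-version-239115 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"

  # Retry the start with the cgroup-driver override suggested by minikube for this failure mode
  minikube start -p old-k8s-version-239115 --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd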

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (435.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:19:55.953577808 +0000 UTC m=+6768.203935639
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-125614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-125614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.225µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-125614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-125614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-125614 logs -n 25: (1.179789303s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	| start   | -p newest-cni-586791 --memory=2200 --alsologtostderr   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	| addons  | enable metrics-server -p newest-cni-586791             | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-586791                  | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-586791 --memory=2200 --alsologtostderr   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-586791 image list                           | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	| delete  | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	| delete  | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:19:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:19:05.761771  195816 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:19:05.761889  195816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:19:05.761897  195816 out.go:304] Setting ErrFile to fd 2...
	I0731 21:19:05.761901  195816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:19:05.762080  195816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 21:19:05.762593  195816 out.go:298] Setting JSON to false
	I0731 21:19:05.763476  195816 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10882,"bootTime":1722449864,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:19:05.763532  195816 start.go:139] virtualization: kvm guest
	I0731 21:19:05.765667  195816 out.go:177] * [newest-cni-586791] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:19:05.767081  195816 notify.go:220] Checking for updates...
	I0731 21:19:05.767099  195816 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 21:19:05.768459  195816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:19:05.769907  195816 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:19:05.771337  195816 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 21:19:05.772648  195816 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:19:05.773990  195816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:19:05.775596  195816 config.go:182] Loaded profile config "newest-cni-586791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:19:05.775958  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:05.776003  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:05.791822  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0731 21:19:05.792250  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:05.792776  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:05.792799  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:05.793105  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:05.793294  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:05.793590  195816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:19:05.793882  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:05.793920  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:05.810777  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0731 21:19:05.811278  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:05.811803  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:05.811826  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:05.812122  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:05.812294  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:05.846760  195816 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:19:05.848266  195816 start.go:297] selected driver: kvm2
	I0731 21:19:05.848283  195816 start.go:901] validating driver "kvm2" against &{Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:19:05.848437  195816 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:19:05.849205  195816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:19:05.849282  195816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:19:05.864357  195816 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:19:05.864764  195816 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 21:19:05.864843  195816 cni.go:84] Creating CNI manager for ""
	I0731 21:19:05.864864  195816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:19:05.864906  195816 start.go:340] cluster config:
	{Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:19:05.865016  195816 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:19:05.866876  195816 out.go:177] * Starting "newest-cni-586791" primary control-plane node in "newest-cni-586791" cluster
	I0731 21:19:05.868074  195816 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:19:05.868111  195816 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:19:05.868122  195816 cache.go:56] Caching tarball of preloaded images
	I0731 21:19:05.868210  195816 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:19:05.868221  195816 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 21:19:05.868314  195816 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/config.json ...
	I0731 21:19:05.868485  195816 start.go:360] acquireMachinesLock for newest-cni-586791: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:19:05.868555  195816 start.go:364] duration metric: took 51.983µs to acquireMachinesLock for "newest-cni-586791"
	I0731 21:19:05.868571  195816 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:19:05.868579  195816 fix.go:54] fixHost starting: 
	I0731 21:19:05.868864  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:05.868896  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:05.884338  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40253
	I0731 21:19:05.884817  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:05.885288  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:05.885303  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:05.885681  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:05.885899  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:05.886084  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:05.887945  195816 fix.go:112] recreateIfNeeded on newest-cni-586791: state=Stopped err=<nil>
	I0731 21:19:05.887987  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	W0731 21:19:05.888180  195816 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:19:05.890790  195816 out.go:177] * Restarting existing kvm2 VM for "newest-cni-586791" ...
	I0731 21:19:05.892018  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Start
	I0731 21:19:05.892197  195816 main.go:141] libmachine: (newest-cni-586791) Ensuring networks are active...
	I0731 21:19:05.892908  195816 main.go:141] libmachine: (newest-cni-586791) Ensuring network default is active
	I0731 21:19:05.893261  195816 main.go:141] libmachine: (newest-cni-586791) Ensuring network mk-newest-cni-586791 is active
	I0731 21:19:05.893620  195816 main.go:141] libmachine: (newest-cni-586791) Getting domain xml...
	I0731 21:19:05.894297  195816 main.go:141] libmachine: (newest-cni-586791) Creating domain...
	I0731 21:19:07.138447  195816 main.go:141] libmachine: (newest-cni-586791) Waiting to get IP...
	I0731 21:19:07.139504  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:07.139923  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:07.140000  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:07.139908  195851 retry.go:31] will retry after 254.920523ms: waiting for machine to come up
	I0731 21:19:07.396542  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:07.397038  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:07.397061  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:07.396985  195851 retry.go:31] will retry after 250.333596ms: waiting for machine to come up
	I0731 21:19:07.649421  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:07.649965  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:07.649992  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:07.649905  195851 retry.go:31] will retry after 395.636435ms: waiting for machine to come up
	I0731 21:19:08.047593  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:08.047975  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:08.048007  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:08.047938  195851 retry.go:31] will retry after 436.386926ms: waiting for machine to come up
	I0731 21:19:08.485674  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:08.486135  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:08.486165  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:08.486089  195851 retry.go:31] will retry after 490.347633ms: waiting for machine to come up
	I0731 21:19:08.977949  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:08.978481  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:08.978512  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:08.978429  195851 retry.go:31] will retry after 623.333636ms: waiting for machine to come up
	I0731 21:19:09.602897  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:09.603418  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:09.603447  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:09.603359  195851 retry.go:31] will retry after 996.812783ms: waiting for machine to come up
	I0731 21:19:10.601466  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:10.601947  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:10.601977  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:10.601898  195851 retry.go:31] will retry after 1.289057078s: waiting for machine to come up
	I0731 21:19:11.892558  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:11.892995  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:11.893027  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:11.892953  195851 retry.go:31] will retry after 1.739936764s: waiting for machine to come up
	I0731 21:19:13.634458  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:13.634910  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:13.634942  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:13.634861  195851 retry.go:31] will retry after 1.886570052s: waiting for machine to come up
	I0731 21:19:15.523611  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:15.524088  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:15.524119  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:15.524037  195851 retry.go:31] will retry after 2.741852261s: waiting for machine to come up
	I0731 21:19:18.267418  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:18.267884  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:18.267911  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:18.267838  195851 retry.go:31] will retry after 2.817878514s: waiting for machine to come up
	I0731 21:19:21.087488  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:21.087925  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:21.087962  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:21.087888  195851 retry.go:31] will retry after 3.35967442s: waiting for machine to come up
	I0731 21:19:24.451374  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.451865  195816 main.go:141] libmachine: (newest-cni-586791) Found IP for machine: 192.168.61.136
	I0731 21:19:24.451885  195816 main.go:141] libmachine: (newest-cni-586791) Reserving static IP address...
	I0731 21:19:24.451898  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has current primary IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.452592  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "newest-cni-586791", mac: "52:54:00:c5:e4:c3", ip: "192.168.61.136"} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.452621  195816 main.go:141] libmachine: (newest-cni-586791) Reserved static IP address: 192.168.61.136
	I0731 21:19:24.452634  195816 main.go:141] libmachine: (newest-cni-586791) DBG | skip adding static IP to network mk-newest-cni-586791 - found existing host DHCP lease matching {name: "newest-cni-586791", mac: "52:54:00:c5:e4:c3", ip: "192.168.61.136"}
	I0731 21:19:24.452645  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Getting to WaitForSSH function...
	I0731 21:19:24.452658  195816 main.go:141] libmachine: (newest-cni-586791) Waiting for SSH to be available...
	I0731 21:19:24.455301  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.455684  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.455718  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.455790  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Using SSH client type: external
	I0731 21:19:24.455837  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa (-rw-------)
	I0731 21:19:24.455881  195816 main.go:141] libmachine: (newest-cni-586791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:19:24.455898  195816 main.go:141] libmachine: (newest-cni-586791) DBG | About to run SSH command:
	I0731 21:19:24.455909  195816 main.go:141] libmachine: (newest-cni-586791) DBG | exit 0
	I0731 21:19:24.585776  195816 main.go:141] libmachine: (newest-cni-586791) DBG | SSH cmd err, output: <nil>: 
	I0731 21:19:24.586175  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetConfigRaw
	I0731 21:19:24.586910  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:19:24.589668  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.589997  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.590066  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.590418  195816 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/config.json ...
	I0731 21:19:24.590652  195816 machine.go:94] provisionDockerMachine start ...
	I0731 21:19:24.590673  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:24.590907  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:24.593553  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.593922  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.593945  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.594101  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:24.594328  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.594503  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.594639  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:24.594827  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:24.595016  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:24.595027  195816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:19:24.705959  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:19:24.705996  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:19:24.706261  195816 buildroot.go:166] provisioning hostname "newest-cni-586791"
	I0731 21:19:24.706294  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:19:24.706509  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:24.709299  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.709673  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.709707  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.709775  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:24.709976  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.710140  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.710277  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:24.710491  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:24.710694  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:24.710710  195816 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-586791 && echo "newest-cni-586791" | sudo tee /etc/hostname
	I0731 21:19:24.840802  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-586791
	
	I0731 21:19:24.840830  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:24.843581  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.843959  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.843982  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.844252  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:24.844448  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.844644  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.844782  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:24.844933  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:24.845152  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:24.845180  195816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-586791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-586791/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-586791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:19:24.969599  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:19:24.969631  195816 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 21:19:24.969706  195816 buildroot.go:174] setting up certificates
	I0731 21:19:24.969720  195816 provision.go:84] configureAuth start
	I0731 21:19:24.969740  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:19:24.970090  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:19:24.973184  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.973592  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.973646  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.973764  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:24.976025  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.976355  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.976394  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.976555  195816 provision.go:143] copyHostCerts
	I0731 21:19:24.976607  195816 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 21:19:24.976617  195816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 21:19:24.976683  195816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 21:19:24.976788  195816 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 21:19:24.976797  195816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 21:19:24.976820  195816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 21:19:24.976872  195816 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 21:19:24.976881  195816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 21:19:24.976911  195816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 21:19:24.976979  195816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.newest-cni-586791 san=[127.0.0.1 192.168.61.136 localhost minikube newest-cni-586791]
	I0731 21:19:25.035238  195816 provision.go:177] copyRemoteCerts
	I0731 21:19:25.035297  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:19:25.035330  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.037856  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.038216  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.038257  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.038475  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.038660  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.038818  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.038944  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:25.129256  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:19:25.157699  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:19:25.183755  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:19:25.209683  195816 provision.go:87] duration metric: took 239.949293ms to configureAuth
	I0731 21:19:25.209712  195816 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:19:25.209890  195816 config.go:182] Loaded profile config "newest-cni-586791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:19:25.209964  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.212368  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.212729  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.212757  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.212967  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.213149  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.213322  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.213515  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.213731  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:25.213905  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:25.213922  195816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:19:25.498098  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:19:25.498132  195816 machine.go:97] duration metric: took 907.465894ms to provisionDockerMachine
	I0731 21:19:25.498144  195816 start.go:293] postStartSetup for "newest-cni-586791" (driver="kvm2")
	I0731 21:19:25.498159  195816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:19:25.498180  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.498573  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:19:25.498612  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.501226  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.501555  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.501582  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.501781  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.501996  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.502177  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.502292  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:25.592660  195816 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:19:25.596907  195816 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:19:25.596932  195816 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 21:19:25.596986  195816 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 21:19:25.597054  195816 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 21:19:25.597147  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:19:25.608036  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 21:19:25.632418  195816 start.go:296] duration metric: took 134.258032ms for postStartSetup
	I0731 21:19:25.632459  195816 fix.go:56] duration metric: took 19.763879225s for fixHost
	I0731 21:19:25.632488  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.635194  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.635549  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.635592  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.635764  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.635963  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.636133  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.636285  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.636462  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:25.636682  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:25.636695  195816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:19:25.750024  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722460765.723824092
	
	I0731 21:19:25.750056  195816 fix.go:216] guest clock: 1722460765.723824092
	I0731 21:19:25.750065  195816 fix.go:229] Guest: 2024-07-31 21:19:25.723824092 +0000 UTC Remote: 2024-07-31 21:19:25.632466287 +0000 UTC m=+19.907513448 (delta=91.357805ms)
	I0731 21:19:25.750087  195816 fix.go:200] guest clock delta is within tolerance: 91.357805ms
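As a quick check of the numbers above: the guest clock reads 21:19:25.723824092 and the host-side timestamp is 21:19:25.632466287, so the difference is 0.723824092 s - 0.632466287 s = 0.091357805 s, i.e. the reported delta of 91.357805ms, well inside the tolerance minikube accepts, so no guest clock adjustment is needed.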
	I0731 21:19:25.750092  195816 start.go:83] releasing machines lock for "newest-cni-586791", held for 19.881526223s
	I0731 21:19:25.750110  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.750424  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:19:25.753075  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.753406  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.753437  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.753610  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.754084  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.754250  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.754341  195816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:19:25.754380  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.754499  195816 ssh_runner.go:195] Run: cat /version.json
	I0731 21:19:25.754522  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.756670  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.756945  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.756974  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.757079  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.757236  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.757260  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.757449  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.757488  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.757508  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.757641  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:25.757660  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.757812  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.757985  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.758165  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:25.864366  195816 ssh_runner.go:195] Run: systemctl --version
	I0731 21:19:25.870216  195816 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:19:26.015775  195816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:19:26.022090  195816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:19:26.022170  195816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:19:26.041572  195816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:19:26.041598  195816 start.go:495] detecting cgroup driver to use...
	I0731 21:19:26.041685  195816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:19:26.063400  195816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:19:26.078176  195816 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:19:26.078245  195816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:19:26.092273  195816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:19:26.106273  195816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:19:26.229649  195816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:19:26.400310  195816 docker.go:233] disabling docker service ...
	I0731 21:19:26.400378  195816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:19:26.415255  195816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:19:26.429142  195816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:19:26.573649  195816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:19:26.702613  195816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:19:26.717331  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:19:26.737042  195816 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:19:26.737117  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.747878  195816 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:19:26.747967  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.759026  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.769442  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.779973  195816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:19:26.790766  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.801848  195816 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.829376  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
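Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch reconstructed from the commands in this log; the section headers and any keys not touched here are assumptions):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]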
	I0731 21:19:26.841230  195816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:19:26.851118  195816 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:19:26.851204  195816 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:19:26.866373  195816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
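The modprobe and the ip_forward write above only change the running guest; a minimal sketch of making the same settings persistent on an ordinary systemd host (not something this log shows minikube doing) would be:

	# load br_netfilter at every boot
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	# enable bridged-traffic iptables hooks and IPv4 forwarding persistently
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system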
	I0731 21:19:26.876299  195816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:19:27.004064  195816 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:19:27.148522  195816 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:19:27.148608  195816 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:19:27.153659  195816 start.go:563] Will wait 60s for crictl version
	I0731 21:19:27.153725  195816 ssh_runner.go:195] Run: which crictl
	I0731 21:19:27.158003  195816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:19:27.199907  195816 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:19:27.200005  195816 ssh_runner.go:195] Run: crio --version
	I0731 21:19:27.229808  195816 ssh_runner.go:195] Run: crio --version
	I0731 21:19:27.264656  195816 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:19:27.266274  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:19:27.269001  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:27.269370  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:27.269398  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:27.269652  195816 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:19:27.273963  195816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:19:27.289973  195816 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0731 21:19:27.291272  195816 kubeadm.go:883] updating cluster {Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:19:27.291408  195816 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:19:27.291483  195816 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:19:27.328970  195816 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:19:27.329063  195816 ssh_runner.go:195] Run: which lz4
	I0731 21:19:27.333268  195816 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:19:27.337498  195816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:19:27.337527  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0731 21:19:28.692994  195816 crio.go:462] duration metric: took 1.359764132s to copy over tarball
	I0731 21:19:28.693099  195816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:19:30.805542  195816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.112408075s)
	I0731 21:19:30.805575  195816 crio.go:469] duration metric: took 2.112541998s to extract the tarball
	I0731 21:19:30.805584  195816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:19:30.844752  195816 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:19:30.895835  195816 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:19:30.895867  195816 cache_images.go:84] Images are preloaded, skipping loading
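For reference, a minimal Go sketch of the copy-then-extract step recorded above; the tar flags and paths are taken from the log lines, while running the commands locally (rather than over SSH inside the guest, as ssh_runner does) is a simplification of this sketch.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball under /var with lz4
// decompression while preserving security xattrs, then removes the tarball,
// matching the tar and rm steps shown in the log above.
func extractPreload() error {
	tarCmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := tarCmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run()
}

func main() {
	if err := extractPreload(); err != nil {
		fmt.Println(err)
	}
}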
	I0731 21:19:30.895878  195816 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:19:30.896013  195816 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-586791 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:19:30.896099  195816 ssh_runner.go:195] Run: crio config
	I0731 21:19:30.946999  195816 cni.go:84] Creating CNI manager for ""
	I0731 21:19:30.947020  195816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:19:30.947037  195816 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0731 21:19:30.947059  195816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-586791 NodeName:newest-cni-586791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:19:30.947201  195816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-586791"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:19:30.947272  195816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:19:30.959102  195816 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:19:30.959185  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:19:30.969162  195816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0731 21:19:30.988111  195816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:19:31.007271  195816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0731 21:19:31.026649  195816 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0731 21:19:31.030890  195816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:19:31.044216  195816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:19:31.174258  195816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:19:31.192546  195816 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791 for IP: 192.168.61.136
	I0731 21:19:31.192573  195816 certs.go:194] generating shared ca certs ...
	I0731 21:19:31.192594  195816 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:31.192789  195816 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 21:19:31.192846  195816 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 21:19:31.192860  195816 certs.go:256] generating profile certs ...
	I0731 21:19:31.192968  195816 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/client.key
	I0731 21:19:31.193042  195816 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/apiserver.key.4c93ecd9
	I0731 21:19:31.193091  195816 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/proxy-client.key
	I0731 21:19:31.193258  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 21:19:31.193308  195816 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 21:19:31.193324  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 21:19:31.193385  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:19:31.193427  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:19:31.193462  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 21:19:31.193517  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 21:19:31.194280  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:19:31.239627  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:19:31.281751  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:19:31.324813  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 21:19:31.355841  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:19:31.383375  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:19:31.410240  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:19:31.435825  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:19:31.460792  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 21:19:31.485288  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:19:31.510081  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 21:19:31.533945  195816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:19:31.552444  195816 ssh_runner.go:195] Run: openssl version
	I0731 21:19:31.558469  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 21:19:31.569794  195816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 21:19:31.574378  195816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 21:19:31.574452  195816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 21:19:31.580453  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:19:31.592481  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:19:31.605046  195816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:31.609667  195816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:31.609739  195816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:31.615432  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:19:31.626653  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 21:19:31.637698  195816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 21:19:31.642080  195816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 21:19:31.642132  195816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 21:19:31.648059  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
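The openssl/ln commands above implement the usual hash-and-symlink pattern for trusting a CA: compute the certificate's subject hash and install a <hash>.0 symlink in /etc/ssl/certs. A minimal Go sketch of that pattern follows; the example path and the plain `ln -fs` (without the `test -L` guard the log uses) are simplifications, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCertByHash asks openssl for the certificate's subject hash, then creates
// /etc/ssl/certs/<hash>.0 pointing at the certificate so OpenSSL-based clients
// can find it, mirroring the commands in the log above.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// -f replaces an existing link, -s makes it symbolic, matching ln -fs above.
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}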
	I0731 21:19:31.659294  195816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:19:31.663913  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:19:31.669771  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:19:31.675581  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:19:31.682300  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:19:31.688600  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:19:31.694478  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 21:19:31.700326  195816 kubeadm.go:392] StartCluster: {Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:19:31.700448  195816 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:19:31.700501  195816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:19:31.738050  195816 cri.go:89] found id: ""
	I0731 21:19:31.738115  195816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:19:31.748708  195816 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:19:31.748730  195816 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:19:31.748791  195816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:19:31.758759  195816 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:19:31.759577  195816 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-586791" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:19:31.760068  195816 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-586791" cluster setting kubeconfig missing "newest-cni-586791" context setting]
	I0731 21:19:31.760855  195816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:31.762368  195816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:19:31.772447  195816 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0731 21:19:31.772480  195816 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:19:31.772494  195816 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:19:31.772548  195816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:19:31.817842  195816 cri.go:89] found id: ""
	I0731 21:19:31.817929  195816 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:19:31.835824  195816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:19:31.845648  195816 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:19:31.845670  195816 kubeadm.go:157] found existing configuration files:
	
	I0731 21:19:31.845721  195816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:19:31.855627  195816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:19:31.855692  195816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:19:31.865329  195816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:19:31.874329  195816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:19:31.874404  195816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:19:31.884650  195816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:19:31.894584  195816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:19:31.894653  195816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:19:31.905038  195816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:19:31.914559  195816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:19:31.914622  195816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:19:31.925440  195816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:19:31.935796  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:32.056258  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:32.826025  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:33.063699  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:33.126259  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
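The five `kubeadm init phase` invocations above run in a fixed order during the control-plane restart. A rough Go sketch of that sequence is shown below; it runs the commands locally rather than over SSH, which is a simplification of this sketch rather than how the test harness works.

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the phase sequence from the log: certs, kubeconfig,
// kubelet-start, control-plane, and local etcd, all against the generated
// /var/tmp/minikube/kubeadm.yaml.
func runInitPhases(k8sVersion, configPath string) error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
			k8sVersion, phase, configPath)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("v1.31.0-beta.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}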
	I0731 21:19:33.229904  195816 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:19:33.230005  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:33.731021  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:34.230174  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:34.730093  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:35.230571  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:35.245555  195816 api_server.go:72] duration metric: took 2.015653275s to wait for apiserver process to appear ...
	I0731 21:19:35.245580  195816 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:19:35.245603  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:35.246026  195816 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0731 21:19:35.745866  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:37.982917  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:19:37.982947  195816 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:19:37.982963  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:38.047150  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:19:38.047182  195816 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:19:38.246570  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:38.254560  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:19:38.254594  195816 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:19:38.745689  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:38.750288  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:19:38.750322  195816 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:19:39.246508  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:39.250743  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0731 21:19:39.257921  195816 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:19:39.257953  195816 api_server.go:131] duration metric: took 4.012364546s to wait for apiserver health ...
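A minimal Go sketch of the healthz polling that api_server.go is reporting above: keep requesting /healthz, tolerate the 403 and 500 responses seen while bootstrap roles are still being created, and stop once the endpoint answers 200 or a deadline passes. How the real client authenticates to the endpoint is not shown in the log, so certificate verification is simply skipped in this sketch.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries the apiserver /healthz endpoint until it returns 200 or
// the timeout elapses.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403/500 during startup is expected, as the log shows; retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.61.136:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}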
	I0731 21:19:39.257965  195816 cni.go:84] Creating CNI manager for ""
	I0731 21:19:39.257974  195816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:19:39.259595  195816 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:19:39.261022  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:19:39.272791  195816 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:19:39.293449  195816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:19:39.302934  195816 system_pods.go:59] 8 kube-system pods found
	I0731 21:19:39.302972  195816 system_pods.go:61] "coredns-5cfdc65f69-ncmmv" [9d4123f3-0bea-4ddc-9178-8ff3e8c2c903] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:19:39.302983  195816 system_pods.go:61] "etcd-newest-cni-586791" [33a5d651-e33e-4b97-9727-0587fccb79ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:19:39.302993  195816 system_pods.go:61] "kube-apiserver-newest-cni-586791" [d1344d91-f88f-439b-8a35-3c3a5ba7c347] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:19:39.303001  195816 system_pods.go:61] "kube-controller-manager-newest-cni-586791" [2f13bf79-a075-464d-be20-3945de8a453b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:19:39.303019  195816 system_pods.go:61] "kube-proxy-5w5q8" [f6b5eab7-51b5-43ec-9e7d-c1489107d922] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:19:39.303028  195816 system_pods.go:61] "kube-scheduler-newest-cni-586791" [9fb1fafe-762b-40cd-bb68-4f5ab0f69d4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:19:39.303039  195816 system_pods.go:61] "metrics-server-78fcd8795b-f9qfb" [6a57bd4b-35e4-41b8-898c-166e81df7e8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:19:39.303052  195816 system_pods.go:61] "storage-provisioner" [fbc0ac03-73b2-4a78-8ff7-0f7bd55e91e8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:19:39.303065  195816 system_pods.go:74] duration metric: took 9.589202ms to wait for pod list to return data ...
	I0731 21:19:39.303075  195816 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:19:39.306693  195816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:19:39.306720  195816 node_conditions.go:123] node cpu capacity is 2
	I0731 21:19:39.306732  195816 node_conditions.go:105] duration metric: took 3.649546ms to run NodePressure ...
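The pod and node checks logged above can be reproduced with a small client-go program. The kubeconfig path below is the one the log writes for this run; everything else is a sketch rather than minikube's own implementation, and it reads node allocatable values where the log may report capacity.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-121704/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List kube-system pods, as system_pods.go does while waiting for them.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
	// Read per-node cpu and ephemeral-storage, echoing the node_conditions lines.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable["cpu"]
		storage := n.Status.Allocatable["ephemeral-storage"]
		fmt.Printf("%s\tcpu=%s\tephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}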
	I0731 21:19:39.306756  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:39.625239  195816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:19:39.637663  195816 ops.go:34] apiserver oom_adj: -16
	I0731 21:19:39.637691  195816 kubeadm.go:597] duration metric: took 7.888952374s to restartPrimaryControlPlane
	I0731 21:19:39.637703  195816 kubeadm.go:394] duration metric: took 7.937393791s to StartCluster
	I0731 21:19:39.637725  195816 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:39.637805  195816 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:19:39.639388  195816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:39.639682  195816 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:19:39.639762  195816 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:19:39.639861  195816 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-586791"
	I0731 21:19:39.639892  195816 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-586791"
	W0731 21:19:39.639905  195816 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:19:39.639927  195816 addons.go:69] Setting metrics-server=true in profile "newest-cni-586791"
	I0731 21:19:39.639935  195816 config.go:182] Loaded profile config "newest-cni-586791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:19:39.639934  195816 addons.go:69] Setting dashboard=true in profile "newest-cni-586791"
	I0731 21:19:39.639930  195816 addons.go:69] Setting default-storageclass=true in profile "newest-cni-586791"
	I0731 21:19:39.639973  195816 addons.go:234] Setting addon metrics-server=true in "newest-cni-586791"
	W0731 21:19:39.639992  195816 addons.go:243] addon metrics-server should already be in state true
	I0731 21:19:39.639995  195816 addons.go:234] Setting addon dashboard=true in "newest-cni-586791"
	W0731 21:19:39.640004  195816 addons.go:243] addon dashboard should already be in state true
	I0731 21:19:39.640013  195816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-586791"
	I0731 21:19:39.640028  195816 host.go:66] Checking if "newest-cni-586791" exists ...
	I0731 21:19:39.640028  195816 host.go:66] Checking if "newest-cni-586791" exists ...
	I0731 21:19:39.639938  195816 host.go:66] Checking if "newest-cni-586791" exists ...
	I0731 21:19:39.640424  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.640445  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.640456  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.640468  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.640497  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.640546  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.640551  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.640576  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.641413  195816 out.go:177] * Verifying Kubernetes components...
	I0731 21:19:39.642899  195816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:19:39.657576  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I0731 21:19:39.658477  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.659146  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.659174  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.659586  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.659818  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.660548  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0731 21:19:39.660704  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0731 21:19:39.660736  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0731 21:19:39.661140  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.661756  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.661777  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.661797  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.661870  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.662171  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.662355  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.662397  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.662371  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.662459  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.662790  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.662817  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.663246  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.663248  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.663832  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.663878  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.664245  195816 addons.go:234] Setting addon default-storageclass=true in "newest-cni-586791"
	W0731 21:19:39.664268  195816 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:19:39.664300  195816 host.go:66] Checking if "newest-cni-586791" exists ...
	I0731 21:19:39.664577  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.664616  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.664660  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.664690  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.680643  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41131
	I0731 21:19:39.681577  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.682391  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.682415  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.682785  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.682964  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.683573  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I0731 21:19:39.684226  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.684782  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.684806  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.685120  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.685296  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.685407  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:39.685533  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I0731 21:19:39.685968  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.686519  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.686539  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.686954  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.687010  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33351
	I0731 21:19:39.687147  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:39.687506  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.687528  195816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:19:39.687737  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.687770  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.687937  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.687958  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.688342  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.688520  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.688826  195816 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:19:39.688899  195816 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:19:39.688922  195816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:19:39.688941  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:39.690163  195816 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:19:39.690182  195816 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:19:39.690210  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:39.690529  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:39.692998  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.693493  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:39.693527  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.693637  195816 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0731 21:19:39.694277  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.694367  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:39.694558  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:39.694626  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:39.694641  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.694717  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:39.694875  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:39.695218  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:39.695857  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:39.696036  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:39.696207  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:39.697797  195816 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0731 21:19:39.699210  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0731 21:19:39.699226  195816 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0731 21:19:39.699239  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:39.702403  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.702771  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:39.702793  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.703049  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:39.703253  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:39.703432  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:39.703624  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:39.707367  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0731 21:19:39.707839  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.708393  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.708421  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.708786  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.708987  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.710855  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:39.711091  195816 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:19:39.711108  195816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:19:39.711124  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:39.714024  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.714424  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:39.714448  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.714563  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:39.714755  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:39.714900  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:39.715035  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:39.833151  195816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:19:39.851102  195816 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:19:39.851198  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:39.865092  195816 api_server.go:72] duration metric: took 225.369543ms to wait for apiserver process to appear ...
	I0731 21:19:39.865115  195816 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:19:39.865134  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:39.870078  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0731 21:19:39.871223  195816 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:19:39.871241  195816 api_server.go:131] duration metric: took 6.119625ms to wait for apiserver health ...
	I0731 21:19:39.871250  195816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:19:39.877669  195816 system_pods.go:59] 8 kube-system pods found
	I0731 21:19:39.877703  195816 system_pods.go:61] "coredns-5cfdc65f69-ncmmv" [9d4123f3-0bea-4ddc-9178-8ff3e8c2c903] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:19:39.877719  195816 system_pods.go:61] "etcd-newest-cni-586791" [33a5d651-e33e-4b97-9727-0587fccb79ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:19:39.877730  195816 system_pods.go:61] "kube-apiserver-newest-cni-586791" [d1344d91-f88f-439b-8a35-3c3a5ba7c347] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:19:39.877743  195816 system_pods.go:61] "kube-controller-manager-newest-cni-586791" [2f13bf79-a075-464d-be20-3945de8a453b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:19:39.877749  195816 system_pods.go:61] "kube-proxy-5w5q8" [f6b5eab7-51b5-43ec-9e7d-c1489107d922] Running
	I0731 21:19:39.877768  195816 system_pods.go:61] "kube-scheduler-newest-cni-586791" [9fb1fafe-762b-40cd-bb68-4f5ab0f69d4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:19:39.877780  195816 system_pods.go:61] "metrics-server-78fcd8795b-f9qfb" [6a57bd4b-35e4-41b8-898c-166e81df7e8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:19:39.877797  195816 system_pods.go:61] "storage-provisioner" [fbc0ac03-73b2-4a78-8ff7-0f7bd55e91e8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:19:39.877808  195816 system_pods.go:74] duration metric: took 6.550593ms to wait for pod list to return data ...
	I0731 21:19:39.877819  195816 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:19:39.880834  195816 default_sa.go:45] found service account: "default"
	I0731 21:19:39.880851  195816 default_sa.go:55] duration metric: took 3.025891ms for default service account to be created ...
	I0731 21:19:39.880861  195816 kubeadm.go:582] duration metric: took 241.142673ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 21:19:39.880874  195816 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:19:39.883590  195816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:19:39.883611  195816 node_conditions.go:123] node cpu capacity is 2
	I0731 21:19:39.883623  195816 node_conditions.go:105] duration metric: took 2.743937ms to run NodePressure ...
	I0731 21:19:39.883637  195816 start.go:241] waiting for startup goroutines ...
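The apiserver wait recorded just above polls https://192.168.61.136:8443/healthz until it returns 200 with an "ok" body before moving on to addons. A minimal standalone Go sketch of that poll follows, using only the endpoint and expected body seen in the log; the skip-verify TLS transport and the fixed 2-minute timeout are illustrative assumptions for a self-contained example, not what the harness itself does (it trusts the cluster CA and uses its own timeouts).

	// healthz_poll.go - sketch of the healthz wait seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 "ok" or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification to keep the sketch standalone.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz did not return 200 ok within %s", timeout)
	}

	func main() {
		// Endpoint taken from the log above.
		if err := waitForHealthz("https://192.168.61.136:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz ok")
	}
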
	I0731 21:19:39.943269  195816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:19:39.979984  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0731 21:19:39.980010  195816 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0731 21:19:39.984368  195816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:19:39.984394  195816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:19:39.995538  195816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:19:40.009058  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0731 21:19:40.009084  195816 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0731 21:19:40.026040  195816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:19:40.026068  195816 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:19:40.115073  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0731 21:19:40.115101  195816 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0731 21:19:40.138998  195816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:19:40.139023  195816 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:19:40.220042  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0731 21:19:40.220064  195816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0731 21:19:40.228886  195816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:19:40.419638  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0731 21:19:40.419668  195816 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0731 21:19:40.452603  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0731 21:19:40.452638  195816 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0731 21:19:40.555206  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0731 21:19:40.555246  195816 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0731 21:19:40.647218  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0731 21:19:40.647255  195816 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0731 21:19:40.670656  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 21:19:40.670683  195816 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0731 21:19:40.694003  195816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 21:19:41.836210  195816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.892902167s)
	I0731 21:19:41.836270  195816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.840691495s)
	I0731 21:19:41.836321  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836339  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836348  195816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.607437143s)
	I0731 21:19:41.836375  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836390  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836273  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836432  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836638  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.836651  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.836660  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836666  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836739  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.836746  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.836753  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836760  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836846  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.836865  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.836882  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836895  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836960  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.836958  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Closing plugin on server side
	I0731 21:19:41.836967  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.836978  195816 addons.go:475] Verifying addon metrics-server=true in "newest-cni-586791"
	I0731 21:19:41.837003  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.837011  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.838495  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Closing plugin on server side
	I0731 21:19:41.838530  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.838538  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.847001  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.847028  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.847329  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.847346  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.847353  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Closing plugin on server side
	I0731 21:19:42.210821  195816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.516766631s)
	I0731 21:19:42.210888  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:42.210904  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:42.211326  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:42.211345  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:42.211356  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:42.211365  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:42.211624  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Closing plugin on server side
	I0731 21:19:42.211679  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:42.211690  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:42.213543  195816 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-586791 addons enable metrics-server
	
	I0731 21:19:42.215059  195816 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0731 21:19:42.216697  195816 addons.go:510] duration metric: took 2.576947973s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0731 21:19:42.216739  195816 start.go:246] waiting for cluster config update ...
	I0731 21:19:42.216753  195816 start.go:255] writing updated cluster config ...
	I0731 21:19:42.217029  195816 ssh_runner.go:195] Run: rm -f paused
	I0731 21:19:42.277240  195816 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:19:42.278995  195816 out.go:177] * Done! kubectl is now configured to use "newest-cni-586791" cluster and "default" namespace by default
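The addon manifests in the run above are applied in batches: one kubectl invocation per addon group, with several -f flags and the in-VM kubeconfig (see the metrics-server and dashboard apply commands earlier in the log). A short Go sketch of that pattern follows; the kubectl binary path, kubeconfig path, and manifest paths are copied from the log, and running it as-is assumes you are inside the minikube VM, whereas the harness issues the same command over SSH and with sudo.

	// apply_addons.go - sketch of the batched "kubectl apply -f ... -f ..." pattern
	// visible in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyAddons runs a single kubectl apply over all given manifests,
	// pointing kubectl at the in-VM kubeconfig via the environment.
	func applyAddons(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
		return nil
	}

	func main() {
		// Manifest paths as staged by the addon installer in the log above.
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		if err := applyAddons("/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
			"/var/lib/minikube/kubeconfig", manifests); err != nil {
			fmt.Println(err)
		}
	}
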
	
	
	==> CRI-O <==
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.530865726Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gnrgs,Uid:203ddf96-11cf-4fd3-8920-aa787815ad1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459561705354657,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:59:13.794336637Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&PodSandboxMetadata{Name:busybox,Uid:5df1bbfb-71e6-41df-a194-4eecaf14017f,Namespace:default,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1722459561700496332,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T20:59:13.794324736Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a0dec65d242b43b63c2cd1ef4935cce1ea0b00d5b8635f1bb9aff38e0ad2a25d,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-jf52w,Uid:00b07830-8180-43c0-83c7-e68d399ae0ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459559901102638,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-jf52w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00b07830-8180-43c0-83c7-e68d399ae0ef,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31
T20:59:13.794339454Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459554120506111,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T20:59:13.794335125Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&PodSandboxMetadata{Name:kube-proxy-csdc4,Uid:24077c7d-f54c-4a54-9791-742327f2a9d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459554112079678,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f54c-4a54-9791-742327f2a9d0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-07-31T20:59:13.794338620Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-125614,Uid:2c778033bc3423b3264c5cb56a14ff89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459549355270499,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff89,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.221:8444,kubernetes.io/config.hash: 2c778033bc3423b3264c5cb56a14ff89,kubernetes.io/config.seen: 2024-07-31T20:59:08.812293215Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48
db,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-125614,Uid:e21e3a7b3bc1fc9b5bb85bffd07df30f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459549344042683,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07df30f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e21e3a7b3bc1fc9b5bb85bffd07df30f,kubernetes.io/config.seen: 2024-07-31T20:59:08.812367879Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-125614,Uid:0e669529bce979d2f87bc85d9b56a4f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459549326023694,Labels:map[string]string{component: kube-controller-manager,
io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b56a4f6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0e669529bce979d2f87bc85d9b56a4f6,kubernetes.io/config.seen: 2024-07-31T20:59:08.812365216Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-125614,Uid:ed232883cfe09c6a025fdae3562ed09d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459549316589601,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clie
nt-urls: https://192.168.50.221:2379,kubernetes.io/config.hash: ed232883cfe09c6a025fdae3562ed09d,kubernetes.io/config.seen: 2024-07-31T20:59:08.848604454Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bc4d8bf7-edc8-4b01-a0f0-253f5a7f94c1 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.531576878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=955565cf-1184-4cdc-be7d-df22d8751c26 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.531641396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=955565cf-1184-4cdc-be7d-df22d8751c26 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.531940654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79a527efd9238c960ee7781d00091d8e65af2116e40d7d550c8f8d951f23ab0d,PodSandboxId:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459564730394018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{io.kubernetes.container.hash: e205fdc1,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025,PodSandboxId:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459562028189819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecca4db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459554971858843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e,PodSandboxId:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459554287405359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f
54c-4a54-9791-742327f2a9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5126dbb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459554233607076,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2
-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c,PodSandboxId:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459549641232498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,},Annotations:map[
string]string{io.kubernetes.container.hash: 5a402b30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085,PodSandboxId:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459549650587122,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b
56a4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447,PodSandboxId:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459549583499363,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07d
f30f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718,PodSandboxId:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459549565760379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff
89,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=955565cf-1184-4cdc-be7d-df22d8751c26 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.548357851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8e30fb2-32f0-49fb-8805-99d6f296c4b9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.548430236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8e30fb2-32f0-49fb-8805-99d6f296c4b9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.549757905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87206208-ab6a-4c51-8aba-644a46ae6d76 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.550145001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460796550124274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87206208-ab6a-4c51-8aba-644a46ae6d76 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.550590737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a33f621c-1432-4294-bc21-1ba9578c1d67 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.550636441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a33f621c-1432-4294-bc21-1ba9578c1d67 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.551154836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79a527efd9238c960ee7781d00091d8e65af2116e40d7d550c8f8d951f23ab0d,PodSandboxId:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459564730394018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{io.kubernetes.container.hash: e205fdc1,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025,PodSandboxId:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459562028189819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecca4db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459554971858843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e,PodSandboxId:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459554287405359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f
54c-4a54-9791-742327f2a9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5126dbb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459554233607076,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2
-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c,PodSandboxId:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459549641232498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,},Annotations:map[
string]string{io.kubernetes.container.hash: 5a402b30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085,PodSandboxId:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459549650587122,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b
56a4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447,PodSandboxId:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459549583499363,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07d
f30f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718,PodSandboxId:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459549565760379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff
89,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a33f621c-1432-4294-bc21-1ba9578c1d67 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.588628164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79cba72f-2d07-497e-9836-d738b6c2f526 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.588884843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79cba72f-2d07-497e-9836-d738b6c2f526 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.590647970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e46b88d-c2bc-4171-957a-ed7120b590d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.591221181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460796591198756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e46b88d-c2bc-4171-957a-ed7120b590d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.591814968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68541e22-02a9-411a-98f3-35fc36350637 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.591885693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68541e22-02a9-411a-98f3-35fc36350637 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.592093445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79a527efd9238c960ee7781d00091d8e65af2116e40d7d550c8f8d951f23ab0d,PodSandboxId:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459564730394018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{io.kubernetes.container.hash: e205fdc1,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025,PodSandboxId:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459562028189819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecca4db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459554971858843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e,PodSandboxId:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459554287405359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f
54c-4a54-9791-742327f2a9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5126dbb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459554233607076,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2
-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c,PodSandboxId:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459549641232498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,},Annotations:map[
string]string{io.kubernetes.container.hash: 5a402b30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085,PodSandboxId:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459549650587122,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b
56a4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447,PodSandboxId:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459549583499363,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07d
f30f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718,PodSandboxId:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459549565760379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff
89,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68541e22-02a9-411a-98f3-35fc36350637 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.629482700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0963886-9816-4e91-ac78-158521ea2eac name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.629574759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0963886-9816-4e91-ac78-158521ea2eac name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.630970526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7883278-0cac-4323-b4d3-1a9f5f612304 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.631385020Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460796631363785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7883278-0cac-4323-b4d3-1a9f5f612304 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.632030305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0f93a58-8df1-4af8-8afc-41d26515e092 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.632106105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0f93a58-8df1-4af8-8afc-41d26515e092 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:56 default-k8s-diff-port-125614 crio[730]: time="2024-07-31 21:19:56.632325312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:79a527efd9238c960ee7781d00091d8e65af2116e40d7d550c8f8d951f23ab0d,PodSandboxId:9bac55b298bd1b804418296dbf8030ce32f98912592975a97abab4ea208339bc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459564730394018,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5df1bbfb-71e6-41df-a194-4eecaf14017f,},Annotations:map[string]string{io.kubernetes.container.hash: e205fdc1,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025,PodSandboxId:4e4dda22151ab2d0d2a14c28d9ca17e3c1fbc0d14b2fe8f9be498bbaf13f9f38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459562028189819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gnrgs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 203ddf96-11cf-4fd3-8920-aa787815ad1a,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecca4db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459554971858843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e,PodSandboxId:56b1b1a1f978c26a4d8aea2f87a3ca208fcb7144a047f492332d447c822fd6b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459554287405359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-csdc4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24077c7d-f
54c-4a54-9791-742327f2a9d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5126dbb8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f,PodSandboxId:0a8448729d58dfd482bd5a49094c2fd4e4ed0e6720a8fb6924487f367ed04675,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459554233607076,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc60c19-af1b-426e-82e2
-5fb9a2d1fb3a,},Annotations:map[string]string{io.kubernetes.container.hash: cd476810,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c,PodSandboxId:cf8a88982129cb1c91958a98584e90ab8df7808a358fff0bef4bc8f6e0b68676,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459549641232498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed232883cfe09c6a025fdae3562ed09d,},Annotations:map[
string]string{io.kubernetes.container.hash: 5a402b30,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085,PodSandboxId:9fb5f81259d301fa86a4c90e49c7318058e432e87fe6b7ce38020462786e512a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459549650587122,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e669529bce979d2f87bc85d9b
56a4f6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447,PodSandboxId:e6e6c2fd49036f8575fa58820d4a20eca5f4b3342399d2530b0a0727071a48db,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459549583499363,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e21e3a7b3bc1fc9b5bb85bffd07d
f30f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718,PodSandboxId:aedbb71d9cd72849d825f2a5157800099e6ea5357acbd4a8db4c3b9d6c1d969f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459549565760379,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-125614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c778033bc3423b3264c5cb56a14ff
89,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0f93a58-8df1-4af8-8afc-41d26515e092 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	79a527efd9238       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   9bac55b298bd1       busybox
	987b733bb2bf1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   4e4dda22151ab       coredns-7db6d8ff4d-gnrgs
	701883982e5a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       4                   0a8448729d58d       storage-provisioner
	c749bf9fffde8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      20 minutes ago      Running             kube-proxy                1                   56b1b1a1f978c       kube-proxy-csdc4
	23b4eaaeaafcc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       3                   0a8448729d58d       storage-provisioner
	c578f56929d84       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      20 minutes ago      Running             kube-controller-manager   1                   9fb5f81259d30       kube-controller-manager-default-k8s-diff-port-125614
	d53e71d03f523       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   cf8a88982129c       etcd-default-k8s-diff-port-125614
	936fe16f8f4b1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      20 minutes ago      Running             kube-scheduler            1                   e6e6c2fd49036       kube-scheduler-default-k8s-diff-port-125614
	89c6731c9919d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      20 minutes ago      Running             kube-apiserver            1                   aedbb71d9cd72       kube-apiserver-default-k8s-diff-port-125614
	
	
	==> coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48122 - 28949 "HINFO IN 9147693834618869361.3872042877004081620. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02692053s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-125614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-125614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=default-k8s-diff-port-125614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_51_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:51:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-125614
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:19:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:15:03 +0000   Wed, 31 Jul 2024 20:51:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:15:03 +0000   Wed, 31 Jul 2024 20:51:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:15:03 +0000   Wed, 31 Jul 2024 20:51:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:15:03 +0000   Wed, 31 Jul 2024 20:59:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.221
	  Hostname:    default-k8s-diff-port-125614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0452ed95624449e1ba8d764eff3412a0
	  System UUID:                0452ed95-6244-49e1-ba8d-764eff3412a0
	  Boot ID:                    11fb6f1d-4681-4ffa-9b18-ac7420edfab8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-gnrgs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-125614                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-125614             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-125614    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-csdc4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-125614             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-jf52w                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-125614 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-125614 event: Registered Node default-k8s-diff-port-125614 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-125614 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-125614 event: Registered Node default-k8s-diff-port-125614 in Controller
	
	
	==> dmesg <==
	[Jul31 20:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050768] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041967] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.823661] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556048] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.358709] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 20:59] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.057443] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061366] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.171325] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.146680] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.289623] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.781226] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.061647] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.169971] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +5.604904] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.966337] systemd-fstab-generator[1605]: Ignoring "noauto" option for root device
	[  +3.776235] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.310419] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] <==
	{"level":"info","ts":"2024-07-31T20:59:11.42928Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T20:59:11.43009Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T20:59:11.430213Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T20:59:11.431561Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:59:11.436276Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.221:2379"}
	{"level":"info","ts":"2024-07-31T20:59:52.74758Z","caller":"traceutil/trace.go:171","msg":"trace[2135354521] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"138.236485ms","start":"2024-07-31T20:59:52.609316Z","end":"2024-07-31T20:59:52.747552Z","steps":["trace[2135354521] 'process raft request'  (duration: 138.100787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:59:53.407384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"313.653088ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-jf52w\" ","response":"range_response_count:1 size:4293"}
	{"level":"info","ts":"2024-07-31T20:59:53.408163Z","caller":"traceutil/trace.go:171","msg":"trace[1826197538] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-jf52w; range_end:; response_count:1; response_revision:644; }","duration":"314.45402ms","start":"2024-07-31T20:59:53.093642Z","end":"2024-07-31T20:59:53.408096Z","steps":["trace[1826197538] 'range keys from in-memory index tree'  (duration: 313.474144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T20:59:53.40825Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T20:59:53.09363Z","time spent":"314.599802ms","remote":"127.0.0.1:54982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4315,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-jf52w\" "}
	{"level":"info","ts":"2024-07-31T21:09:11.46713Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":871}
	{"level":"info","ts":"2024-07-31T21:09:11.47828Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":871,"took":"10.68118ms","hash":3895722527,"current-db-size-bytes":2744320,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2744320,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-31T21:09:11.478397Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3895722527,"revision":871,"compact-revision":-1}
	{"level":"info","ts":"2024-07-31T21:14:11.475253Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1113}
	{"level":"info","ts":"2024-07-31T21:14:11.479808Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1113,"took":"4.192207ms","hash":2096674210,"current-db-size-bytes":2744320,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-31T21:14:11.479868Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2096674210,"revision":1113,"compact-revision":871}
	{"level":"warn","ts":"2024-07-31T21:18:39.80909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.193639ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7552133099815051924 > lease_revoke:<id:68ce910a96f22245>","response":"size:27"}
	{"level":"warn","ts":"2024-07-31T21:18:40.859066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.961398ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:18:40.859205Z","caller":"traceutil/trace.go:171","msg":"trace[439311489] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1575; }","duration":"157.079196ms","start":"2024-07-31T21:18:40.702043Z","end":"2024-07-31T21:18:40.859123Z","steps":["trace[439311489] 'range keys from in-memory index tree'  (duration: 156.874269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:18:42.816841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.801215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:18:42.816948Z","caller":"traceutil/trace.go:171","msg":"trace[740058280] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1576; }","duration":"114.947848ms","start":"2024-07-31T21:18:42.701988Z","end":"2024-07-31T21:18:42.816936Z","steps":["trace[740058280] 'range keys from in-memory index tree'  (duration: 114.705684ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:19:11.482397Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1357}
	{"level":"info","ts":"2024-07-31T21:19:11.486256Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1357,"took":"3.617331ms","hash":2222190657,"current-db-size-bytes":2744320,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-31T21:19:11.486312Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2222190657,"revision":1357,"compact-revision":1113}
	{"level":"info","ts":"2024-07-31T21:19:33.099848Z","caller":"traceutil/trace.go:171","msg":"trace[1543714096] transaction","detail":"{read_only:false; response_revision:1618; number_of_response:1; }","duration":"193.966729ms","start":"2024-07-31T21:19:32.90584Z","end":"2024-07-31T21:19:33.099806Z","steps":["trace[1543714096] 'process raft request'  (duration: 193.694853ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:19:34.446897Z","caller":"traceutil/trace.go:171","msg":"trace[1547892568] transaction","detail":"{read_only:false; response_revision:1619; number_of_response:1; }","duration":"149.936452ms","start":"2024-07-31T21:19:34.296942Z","end":"2024-07-31T21:19:34.446878Z","steps":["trace[1547892568] 'process raft request'  (duration: 125.218151ms)","trace[1547892568] 'compare'  (duration: 23.43539ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:19:56 up 21 min,  0 users,  load average: 0.23, 0.23, 0.17
	Linux default-k8s-diff-port-125614 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] <==
	I0731 21:14:13.894032       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:15:13.893445       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:15:13.893739       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:15:13.893777       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:15:13.894489       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:15:13.894535       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:15:13.895667       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:17:13.894023       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:17:13.894317       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:17:13.894358       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:17:13.896291       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:17:13.896317       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:17:13.896324       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:19:12.899804       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:19:12.899943       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 21:19:13.901116       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:19:13.901252       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:19:13.901300       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:19:13.901137       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:19:13.901426       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:19:13.903384       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] <==
	E0731 21:14:26.444536       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:14:26.990360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:14:56.449594       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:14:56.999529       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:15:22.915878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.503603ms"
	E0731 21:15:26.455016       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:15:27.009193       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:15:36.910549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="113.837µs"
	E0731 21:15:56.460611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:15:57.018359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:16:26.465468       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:16:27.026322       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:16:56.471823       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:16:57.033905       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:17:26.478020       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:17:27.041303       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:17:56.483459       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:17:57.049391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:18:26.488425       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:18:27.057129       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:18:56.500365       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:18:57.064662       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:19:26.505421       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:19:27.072921       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:19:56.511244       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	
	
	==> kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] <==
	I0731 20:59:14.491091       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:59:14.505031       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.221"]
	I0731 20:59:14.583884       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:59:14.583990       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:59:14.584027       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:59:14.596004       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:59:14.596236       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:59:14.596393       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:59:14.597577       1 config.go:192] "Starting service config controller"
	I0731 20:59:14.597635       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:59:14.597748       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:59:14.597772       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:59:14.598245       1 config.go:319] "Starting node config controller"
	I0731 20:59:14.598282       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:59:14.698095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:59:14.698155       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:59:14.698422       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] <==
	I0731 20:59:10.436472       1 serving.go:380] Generated self-signed cert in-memory
	W0731 20:59:12.817886       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 20:59:12.817999       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 20:59:12.818039       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 20:59:12.818069       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 20:59:12.878624       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 20:59:12.878830       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:59:12.885225       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 20:59:12.885477       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 20:59:12.885527       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 20:59:12.885565       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 20:59:12.985986       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:17:08 default-k8s-diff-port-125614 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:17:08 default-k8s-diff-port-125614 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:17:12 default-k8s-diff-port-125614 kubelet[942]: E0731 21:17:12.893092     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:17:27 default-k8s-diff-port-125614 kubelet[942]: E0731 21:17:27.893389     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:17:39 default-k8s-diff-port-125614 kubelet[942]: E0731 21:17:39.892238     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:17:51 default-k8s-diff-port-125614 kubelet[942]: E0731 21:17:51.893150     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:18:05 default-k8s-diff-port-125614 kubelet[942]: E0731 21:18:05.893208     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:18:08 default-k8s-diff-port-125614 kubelet[942]: E0731 21:18:08.920018     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:18:08 default-k8s-diff-port-125614 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:18:08 default-k8s-diff-port-125614 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:18:08 default-k8s-diff-port-125614 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:18:08 default-k8s-diff-port-125614 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:18:17 default-k8s-diff-port-125614 kubelet[942]: E0731 21:18:17.892437     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:18:31 default-k8s-diff-port-125614 kubelet[942]: E0731 21:18:31.893882     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:18:45 default-k8s-diff-port-125614 kubelet[942]: E0731 21:18:45.892412     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:18:57 default-k8s-diff-port-125614 kubelet[942]: E0731 21:18:57.892582     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:19:08 default-k8s-diff-port-125614 kubelet[942]: E0731 21:19:08.911497     942 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:19:08 default-k8s-diff-port-125614 kubelet[942]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:19:08 default-k8s-diff-port-125614 kubelet[942]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:19:08 default-k8s-diff-port-125614 kubelet[942]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:19:08 default-k8s-diff-port-125614 kubelet[942]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:19:11 default-k8s-diff-port-125614 kubelet[942]: E0731 21:19:11.892981     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:19:26 default-k8s-diff-port-125614 kubelet[942]: E0731 21:19:26.893899     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:19:41 default-k8s-diff-port-125614 kubelet[942]: E0731 21:19:41.893573     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	Jul 31 21:19:56 default-k8s-diff-port-125614 kubelet[942]: E0731 21:19:56.896062     942 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf52w" podUID="00b07830-8180-43c0-83c7-e68d399ae0ef"
	
	
	==> storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] <==
	I0731 20:59:14.347865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 20:59:14.350544       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] <==
	I0731 20:59:15.083083       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:59:15.092570       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:59:15.092665       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 20:59:32.500867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 20:59:32.501648       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125614_af725880-2b4d-4308-9377-e920a52e7319!
	I0731 20:59:32.502590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"65a81be9-ded5-45cb-ac18-08638a5bac46", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-125614_af725880-2b4d-4308-9377-e920a52e7319 became leader
	I0731 20:59:32.603649       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-125614_af725880-2b4d-4308-9377-e920a52e7319!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-125614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jf52w
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-125614 describe pod metrics-server-569cc877fc-jf52w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-125614 describe pod metrics-server-569cc877fc-jf52w: exit status 1 (61.506851ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jf52w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-125614 describe pod metrics-server-569cc877fc-jf52w: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (435.14s)
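As a manual, illustrative follow-up (not part of the recorded run), the objects this check polls for could be inspected against the same profile. The commands below are only a sketch: they assume the default-k8s-diff-port-125614 cluster from the log above is still running, the dashboard namespace and label selector are the ones the harness reports, and the metrics-server label selector is an assumption about the addon's standard labels.

	# Pods the AddonExistsAfterStop check waits for (dashboard addon)
	kubectl --context default-k8s-diff-port-125614 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# The metrics-server pod reported as non-running in the post-mortem above
	kubectl --context default-k8s-diff-port-125614 -n kube-system get pods -l k8s-app=metrics-server -o wide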

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (326.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-916885 -n no-preload-916885
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:18:36.362142651 +0000 UTC m=+6688.612500485
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-916885 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-916885 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.77µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-916885 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-916885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-916885 logs -n 25: (1.411470124s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo find                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo crio                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-341849                                       | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-248084 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-248084                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:51 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831240            | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-916885             | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	| start   | -p newest-cni-586791 --memory=2200 --alsologtostderr   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:18:07
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:18:07.758340  195076 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:18:07.758489  195076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:18:07.758500  195076 out.go:304] Setting ErrFile to fd 2...
	I0731 21:18:07.758505  195076 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:18:07.758696  195076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 21:18:07.759272  195076 out.go:298] Setting JSON to false
	I0731 21:18:07.760401  195076 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10824,"bootTime":1722449864,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:18:07.760459  195076 start.go:139] virtualization: kvm guest
	I0731 21:18:07.762856  195076 out.go:177] * [newest-cni-586791] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:18:07.764280  195076 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 21:18:07.764278  195076 notify.go:220] Checking for updates...
	I0731 21:18:07.765882  195076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:18:07.767261  195076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:18:07.768458  195076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 21:18:07.769912  195076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:18:07.771358  195076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:18:07.773347  195076 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:18:07.773480  195076 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:18:07.773566  195076 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:18:07.773682  195076 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:18:07.811669  195076 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:18:07.812969  195076 start.go:297] selected driver: kvm2
	I0731 21:18:07.812986  195076 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:18:07.813001  195076 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:18:07.814178  195076 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:18:07.814276  195076 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:18:07.829700  195076 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:18:07.829752  195076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0731 21:18:07.829777  195076 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 21:18:07.830068  195076 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 21:18:07.830099  195076 cni.go:84] Creating CNI manager for ""
	I0731 21:18:07.830111  195076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:18:07.830125  195076 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:18:07.830202  195076 start.go:340] cluster config:
	{Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:18:07.830341  195076 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:18:07.832395  195076 out.go:177] * Starting "newest-cni-586791" primary control-plane node in "newest-cni-586791" cluster
	I0731 21:18:07.833479  195076 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:18:07.833520  195076 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:18:07.833535  195076 cache.go:56] Caching tarball of preloaded images
	I0731 21:18:07.833622  195076 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:18:07.833636  195076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 21:18:07.833740  195076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/config.json ...
	I0731 21:18:07.833764  195076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/config.json: {Name:mk6cae86d327bc72590c87291dcea071f36d6f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:18:07.833903  195076 start.go:360] acquireMachinesLock for newest-cni-586791: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:18:07.833938  195076 start.go:364] duration metric: took 19.96µs to acquireMachinesLock for "newest-cni-586791"
	I0731 21:18:07.833974  195076 start.go:93] Provisioning new machine with config: &{Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:18:07.834035  195076 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 21:18:07.836446  195076 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 21:18:07.836584  195076 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:18:07.836647  195076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:18:07.851076  195076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I0731 21:18:07.851905  195076 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:18:07.852731  195076 main.go:141] libmachine: Using API Version  1
	I0731 21:18:07.852765  195076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:18:07.853085  195076 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:18:07.853295  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:18:07.853471  195076 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:18:07.853676  195076 start.go:159] libmachine.API.Create for "newest-cni-586791" (driver="kvm2")
	I0731 21:18:07.853721  195076 client.go:168] LocalClient.Create starting
	I0731 21:18:07.853759  195076 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem
	I0731 21:18:07.853799  195076 main.go:141] libmachine: Decoding PEM data...
	I0731 21:18:07.853829  195076 main.go:141] libmachine: Parsing certificate...
	I0731 21:18:07.853910  195076 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem
	I0731 21:18:07.853938  195076 main.go:141] libmachine: Decoding PEM data...
	I0731 21:18:07.853954  195076 main.go:141] libmachine: Parsing certificate...
	I0731 21:18:07.853985  195076 main.go:141] libmachine: Running pre-create checks...
	I0731 21:18:07.853995  195076 main.go:141] libmachine: (newest-cni-586791) Calling .PreCreateCheck
	I0731 21:18:07.854347  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetConfigRaw
	I0731 21:18:07.855015  195076 main.go:141] libmachine: Creating machine...
	I0731 21:18:07.855029  195076 main.go:141] libmachine: (newest-cni-586791) Calling .Create
	I0731 21:18:07.855168  195076 main.go:141] libmachine: (newest-cni-586791) Creating KVM machine...
	I0731 21:18:07.856450  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found existing default KVM network
	I0731 21:18:07.857849  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:07.857697  195099 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c4:8c:a0} reservation:<nil>}
	I0731 21:18:07.858767  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:07.858694  195099 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:05:5b:38} reservation:<nil>}
	I0731 21:18:07.859945  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:07.859879  195099 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030eba0}
	I0731 21:18:07.859997  195076 main.go:141] libmachine: (newest-cni-586791) DBG | created network xml: 
	I0731 21:18:07.860019  195076 main.go:141] libmachine: (newest-cni-586791) DBG | <network>
	I0731 21:18:07.860030  195076 main.go:141] libmachine: (newest-cni-586791) DBG |   <name>mk-newest-cni-586791</name>
	I0731 21:18:07.860040  195076 main.go:141] libmachine: (newest-cni-586791) DBG |   <dns enable='no'/>
	I0731 21:18:07.860050  195076 main.go:141] libmachine: (newest-cni-586791) DBG |   
	I0731 21:18:07.860061  195076 main.go:141] libmachine: (newest-cni-586791) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0731 21:18:07.860095  195076 main.go:141] libmachine: (newest-cni-586791) DBG |     <dhcp>
	I0731 21:18:07.860114  195076 main.go:141] libmachine: (newest-cni-586791) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0731 21:18:07.860126  195076 main.go:141] libmachine: (newest-cni-586791) DBG |     </dhcp>
	I0731 21:18:07.860133  195076 main.go:141] libmachine: (newest-cni-586791) DBG |   </ip>
	I0731 21:18:07.860144  195076 main.go:141] libmachine: (newest-cni-586791) DBG |   
	I0731 21:18:07.860154  195076 main.go:141] libmachine: (newest-cni-586791) DBG | </network>
	I0731 21:18:07.860164  195076 main.go:141] libmachine: (newest-cni-586791) DBG | 
	I0731 21:18:07.865260  195076 main.go:141] libmachine: (newest-cni-586791) DBG | trying to create private KVM network mk-newest-cni-586791 192.168.61.0/24...
	I0731 21:18:07.938348  195076 main.go:141] libmachine: (newest-cni-586791) DBG | private KVM network mk-newest-cni-586791 192.168.61.0/24 created
	I0731 21:18:07.938398  195076 main.go:141] libmachine: (newest-cni-586791) Setting up store path in /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791 ...
	I0731 21:18:07.938425  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:07.938310  195099 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 21:18:07.938455  195076 main.go:141] libmachine: (newest-cni-586791) Building disk image from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 21:18:07.938477  195076 main.go:141] libmachine: (newest-cni-586791) Downloading /home/jenkins/minikube-integration/19355-121704/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0731 21:18:08.235688  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:08.235557  195099 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa...
	I0731 21:18:08.366936  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:08.366792  195099 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/newest-cni-586791.rawdisk...
	I0731 21:18:08.366973  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Writing magic tar header
	I0731 21:18:08.366984  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Writing SSH key tar header
	I0731 21:18:08.366993  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:08.366936  195099 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791 ...
	I0731 21:18:08.367040  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791
	I0731 21:18:08.367100  195076 main.go:141] libmachine: (newest-cni-586791) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791 (perms=drwx------)
	I0731 21:18:08.367131  195076 main.go:141] libmachine: (newest-cni-586791) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube/machines (perms=drwxr-xr-x)
	I0731 21:18:08.367148  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube/machines
	I0731 21:18:08.367179  195076 main.go:141] libmachine: (newest-cni-586791) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704/.minikube (perms=drwxr-xr-x)
	I0731 21:18:08.367229  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 21:18:08.367244  195076 main.go:141] libmachine: (newest-cni-586791) Setting executable bit set on /home/jenkins/minikube-integration/19355-121704 (perms=drwxrwxr-x)
	I0731 21:18:08.367260  195076 main.go:141] libmachine: (newest-cni-586791) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 21:18:08.367274  195076 main.go:141] libmachine: (newest-cni-586791) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 21:18:08.367290  195076 main.go:141] libmachine: (newest-cni-586791) Creating domain...
	I0731 21:18:08.367306  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-121704
	I0731 21:18:08.367324  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 21:18:08.367426  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Checking permissions on dir: /home/jenkins
	I0731 21:18:08.367466  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Checking permissions on dir: /home
	I0731 21:18:08.367480  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Skipping /home - not owner
	I0731 21:18:08.368298  195076 main.go:141] libmachine: (newest-cni-586791) define libvirt domain using xml: 
	I0731 21:18:08.368315  195076 main.go:141] libmachine: (newest-cni-586791) <domain type='kvm'>
	I0731 21:18:08.368324  195076 main.go:141] libmachine: (newest-cni-586791)   <name>newest-cni-586791</name>
	I0731 21:18:08.368332  195076 main.go:141] libmachine: (newest-cni-586791)   <memory unit='MiB'>2200</memory>
	I0731 21:18:08.368342  195076 main.go:141] libmachine: (newest-cni-586791)   <vcpu>2</vcpu>
	I0731 21:18:08.368351  195076 main.go:141] libmachine: (newest-cni-586791)   <features>
	I0731 21:18:08.368356  195076 main.go:141] libmachine: (newest-cni-586791)     <acpi/>
	I0731 21:18:08.368368  195076 main.go:141] libmachine: (newest-cni-586791)     <apic/>
	I0731 21:18:08.368373  195076 main.go:141] libmachine: (newest-cni-586791)     <pae/>
	I0731 21:18:08.368378  195076 main.go:141] libmachine: (newest-cni-586791)     
	I0731 21:18:08.368385  195076 main.go:141] libmachine: (newest-cni-586791)   </features>
	I0731 21:18:08.368392  195076 main.go:141] libmachine: (newest-cni-586791)   <cpu mode='host-passthrough'>
	I0731 21:18:08.368401  195076 main.go:141] libmachine: (newest-cni-586791)   
	I0731 21:18:08.368410  195076 main.go:141] libmachine: (newest-cni-586791)   </cpu>
	I0731 21:18:08.368418  195076 main.go:141] libmachine: (newest-cni-586791)   <os>
	I0731 21:18:08.368432  195076 main.go:141] libmachine: (newest-cni-586791)     <type>hvm</type>
	I0731 21:18:08.368445  195076 main.go:141] libmachine: (newest-cni-586791)     <boot dev='cdrom'/>
	I0731 21:18:08.368453  195076 main.go:141] libmachine: (newest-cni-586791)     <boot dev='hd'/>
	I0731 21:18:08.368459  195076 main.go:141] libmachine: (newest-cni-586791)     <bootmenu enable='no'/>
	I0731 21:18:08.368464  195076 main.go:141] libmachine: (newest-cni-586791)   </os>
	I0731 21:18:08.368469  195076 main.go:141] libmachine: (newest-cni-586791)   <devices>
	I0731 21:18:08.368478  195076 main.go:141] libmachine: (newest-cni-586791)     <disk type='file' device='cdrom'>
	I0731 21:18:08.368490  195076 main.go:141] libmachine: (newest-cni-586791)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/boot2docker.iso'/>
	I0731 21:18:08.368510  195076 main.go:141] libmachine: (newest-cni-586791)       <target dev='hdc' bus='scsi'/>
	I0731 21:18:08.368521  195076 main.go:141] libmachine: (newest-cni-586791)       <readonly/>
	I0731 21:18:08.368528  195076 main.go:141] libmachine: (newest-cni-586791)     </disk>
	I0731 21:18:08.368540  195076 main.go:141] libmachine: (newest-cni-586791)     <disk type='file' device='disk'>
	I0731 21:18:08.368555  195076 main.go:141] libmachine: (newest-cni-586791)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 21:18:08.368574  195076 main.go:141] libmachine: (newest-cni-586791)       <source file='/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/newest-cni-586791.rawdisk'/>
	I0731 21:18:08.368589  195076 main.go:141] libmachine: (newest-cni-586791)       <target dev='hda' bus='virtio'/>
	I0731 21:18:08.368601  195076 main.go:141] libmachine: (newest-cni-586791)     </disk>
	I0731 21:18:08.368619  195076 main.go:141] libmachine: (newest-cni-586791)     <interface type='network'>
	I0731 21:18:08.368632  195076 main.go:141] libmachine: (newest-cni-586791)       <source network='mk-newest-cni-586791'/>
	I0731 21:18:08.368644  195076 main.go:141] libmachine: (newest-cni-586791)       <model type='virtio'/>
	I0731 21:18:08.368650  195076 main.go:141] libmachine: (newest-cni-586791)     </interface>
	I0731 21:18:08.368659  195076 main.go:141] libmachine: (newest-cni-586791)     <interface type='network'>
	I0731 21:18:08.368666  195076 main.go:141] libmachine: (newest-cni-586791)       <source network='default'/>
	I0731 21:18:08.368671  195076 main.go:141] libmachine: (newest-cni-586791)       <model type='virtio'/>
	I0731 21:18:08.368678  195076 main.go:141] libmachine: (newest-cni-586791)     </interface>
	I0731 21:18:08.368683  195076 main.go:141] libmachine: (newest-cni-586791)     <serial type='pty'>
	I0731 21:18:08.368690  195076 main.go:141] libmachine: (newest-cni-586791)       <target port='0'/>
	I0731 21:18:08.368695  195076 main.go:141] libmachine: (newest-cni-586791)     </serial>
	I0731 21:18:08.368701  195076 main.go:141] libmachine: (newest-cni-586791)     <console type='pty'>
	I0731 21:18:08.368705  195076 main.go:141] libmachine: (newest-cni-586791)       <target type='serial' port='0'/>
	I0731 21:18:08.368713  195076 main.go:141] libmachine: (newest-cni-586791)     </console>
	I0731 21:18:08.368717  195076 main.go:141] libmachine: (newest-cni-586791)     <rng model='virtio'>
	I0731 21:18:08.368729  195076 main.go:141] libmachine: (newest-cni-586791)       <backend model='random'>/dev/random</backend>
	I0731 21:18:08.368738  195076 main.go:141] libmachine: (newest-cni-586791)     </rng>
	I0731 21:18:08.368746  195076 main.go:141] libmachine: (newest-cni-586791)     
	I0731 21:18:08.368755  195076 main.go:141] libmachine: (newest-cni-586791)     
	I0731 21:18:08.368763  195076 main.go:141] libmachine: (newest-cni-586791)   </devices>
	I0731 21:18:08.368773  195076 main.go:141] libmachine: (newest-cni-586791) </domain>
	I0731 21:18:08.368782  195076 main.go:141] libmachine: (newest-cni-586791) 
	I0731 21:18:08.373192  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:57:e5:37 in network default
	I0731 21:18:08.373745  195076 main.go:141] libmachine: (newest-cni-586791) Ensuring networks are active...
	I0731 21:18:08.373769  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:08.374380  195076 main.go:141] libmachine: (newest-cni-586791) Ensuring network default is active
	I0731 21:18:08.374654  195076 main.go:141] libmachine: (newest-cni-586791) Ensuring network mk-newest-cni-586791 is active
	I0731 21:18:08.375152  195076 main.go:141] libmachine: (newest-cni-586791) Getting domain xml...
	I0731 21:18:08.375815  195076 main.go:141] libmachine: (newest-cni-586791) Creating domain...
	I0731 21:18:09.660469  195076 main.go:141] libmachine: (newest-cni-586791) Waiting to get IP...
	I0731 21:18:09.661495  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:09.661953  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:09.662005  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:09.661942  195099 retry.go:31] will retry after 290.117081ms: waiting for machine to come up
	I0731 21:18:09.953512  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:09.954009  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:09.954032  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:09.953969  195099 retry.go:31] will retry after 372.930886ms: waiting for machine to come up
	I0731 21:18:10.328496  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:10.328999  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:10.329028  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:10.328954  195099 retry.go:31] will retry after 486.773236ms: waiting for machine to come up
	I0731 21:18:10.817701  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:10.818199  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:10.818225  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:10.818162  195099 retry.go:31] will retry after 391.05906ms: waiting for machine to come up
	I0731 21:18:11.210567  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:11.210993  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:11.211022  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:11.210940  195099 retry.go:31] will retry after 555.248552ms: waiting for machine to come up
	I0731 21:18:11.767816  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:11.768240  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:11.768272  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:11.768191  195099 retry.go:31] will retry after 923.001021ms: waiting for machine to come up
	I0731 21:18:12.693394  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:12.693754  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:12.693778  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:12.693705  195099 retry.go:31] will retry after 1.131838078s: waiting for machine to come up
	I0731 21:18:13.827615  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:13.828037  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:13.828068  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:13.827974  195099 retry.go:31] will retry after 1.230499434s: waiting for machine to come up
	I0731 21:18:15.060335  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:15.060748  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:15.060772  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:15.060714  195099 retry.go:31] will retry after 1.51700058s: waiting for machine to come up
	I0731 21:18:16.579860  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:16.580276  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:16.580298  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:16.580217  195099 retry.go:31] will retry after 2.005475272s: waiting for machine to come up
	I0731 21:18:18.587838  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:18.588368  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:18.588403  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:18.588299  195099 retry.go:31] will retry after 2.007548566s: waiting for machine to come up
	I0731 21:18:20.598472  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:20.598885  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:20.598911  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:20.598852  195099 retry.go:31] will retry after 3.217549081s: waiting for machine to come up
	I0731 21:18:23.817479  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:23.817909  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:23.817949  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:23.817898  195099 retry.go:31] will retry after 3.174561607s: waiting for machine to come up
	I0731 21:18:26.994576  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:26.995032  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:18:26.995054  195076 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:18:26.994993  195099 retry.go:31] will retry after 4.660786705s: waiting for machine to come up
	I0731 21:18:31.658347  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:31.658798  195076 main.go:141] libmachine: (newest-cni-586791) Found IP for machine: 192.168.61.136
	I0731 21:18:31.658848  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has current primary IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:31.658860  195076 main.go:141] libmachine: (newest-cni-586791) Reserving static IP address...
	I0731 21:18:31.659217  195076 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find host DHCP lease matching {name: "newest-cni-586791", mac: "52:54:00:c5:e4:c3", ip: "192.168.61.136"} in network mk-newest-cni-586791
	I0731 21:18:31.735730  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Getting to WaitForSSH function...
	I0731 21:18:31.735763  195076 main.go:141] libmachine: (newest-cni-586791) Reserved static IP address: 192.168.61.136
	I0731 21:18:31.735778  195076 main.go:141] libmachine: (newest-cni-586791) Waiting for SSH to be available...
	I0731 21:18:31.738537  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:31.739061  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:31.739091  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:31.739184  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Using SSH client type: external
	I0731 21:18:31.739205  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa (-rw-------)
	I0731 21:18:31.739248  195076 main.go:141] libmachine: (newest-cni-586791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:18:31.739272  195076 main.go:141] libmachine: (newest-cni-586791) DBG | About to run SSH command:
	I0731 21:18:31.739284  195076 main.go:141] libmachine: (newest-cni-586791) DBG | exit 0
	I0731 21:18:31.869600  195076 main.go:141] libmachine: (newest-cni-586791) DBG | SSH cmd err, output: <nil>: 
	I0731 21:18:31.869973  195076 main.go:141] libmachine: (newest-cni-586791) KVM machine creation complete!
	I0731 21:18:31.870258  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetConfigRaw
	I0731 21:18:31.870917  195076 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:18:31.871131  195076 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:18:31.871272  195076 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 21:18:31.871287  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:18:31.872778  195076 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 21:18:31.872803  195076 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 21:18:31.872808  195076 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 21:18:31.872817  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:31.875275  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:31.875596  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:31.875616  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:31.875803  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:31.875975  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:31.876147  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:31.876306  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:31.876495  195076 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:31.876703  195076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:18:31.876715  195076 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 21:18:31.984622  195076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:18:31.984650  195076 main.go:141] libmachine: Detecting the provisioner...
	I0731 21:18:31.984658  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:31.987430  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:31.987788  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:31.987813  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:31.987962  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:31.988174  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:31.988329  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:31.988507  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:31.988764  195076 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:31.988945  195076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:18:31.988956  195076 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 21:18:32.098025  195076 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 21:18:32.098110  195076 main.go:141] libmachine: found compatible host: buildroot
	I0731 21:18:32.098119  195076 main.go:141] libmachine: Provisioning with buildroot...
	I0731 21:18:32.098127  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:18:32.098358  195076 buildroot.go:166] provisioning hostname "newest-cni-586791"
	I0731 21:18:32.098372  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:18:32.098612  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:32.101711  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.102143  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:32.102169  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.102342  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:32.102555  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:32.102725  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:32.102880  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:32.103114  195076 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:32.103267  195076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:18:32.103278  195076 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-586791 && echo "newest-cni-586791" | sudo tee /etc/hostname
	I0731 21:18:32.229695  195076 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-586791
	
	I0731 21:18:32.229725  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:32.232931  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.233318  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:32.233381  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.233546  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:32.233771  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:32.233948  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:32.234098  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:32.234278  195076 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:32.234500  195076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:18:32.234527  195076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-586791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-586791/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-586791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:18:32.350974  195076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
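The shell guard above makes the 127.0.1.1 hostname entry idempotent: it replaces an existing 127.0.1.1 line if one is present, otherwise it appends one. A rough Go equivalent of that guard is sketched below; the file path and hostname come from this log, and the function is illustrative rather than minikube's provisioning code.

	// hostsentry.go: ensure /etc/hosts maps 127.0.1.1 to the machine's hostname,
	// mirroring the grep/sed/tee guard shown in the log. Illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os"
		"regexp"
		"strings"
	)

	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Already mapped to this hostname somewhere? Then nothing to do.
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil
		}
		line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if line127.Match(data) {
			out = line127.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "newest-cni-586791"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("hosts entry ensured")
	}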
	I0731 21:18:32.351001  195076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 21:18:32.351069  195076 buildroot.go:174] setting up certificates
	I0731 21:18:32.351090  195076 provision.go:84] configureAuth start
	I0731 21:18:32.351105  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:18:32.351381  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:18:32.354267  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.354586  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:32.354619  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.354778  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:32.357274  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.357624  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:32.357650  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.357811  195076 provision.go:143] copyHostCerts
	I0731 21:18:32.357871  195076 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 21:18:32.357885  195076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 21:18:32.357969  195076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 21:18:32.358155  195076 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 21:18:32.358168  195076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 21:18:32.358206  195076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 21:18:32.358282  195076 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 21:18:32.358292  195076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 21:18:32.358324  195076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 21:18:32.358387  195076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.newest-cni-586791 san=[127.0.0.1 192.168.61.136 localhost minikube newest-cni-586791]
	I0731 21:18:32.431646  195076 provision.go:177] copyRemoteCerts
	I0731 21:18:32.431718  195076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:18:32.431750  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:32.434313  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.434653  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:32.434683  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.434901  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:32.435096  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:32.435267  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:32.435460  195076 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:18:32.520102  195076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:18:32.544564  195076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:18:32.571077  195076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:18:32.595084  195076 provision.go:87] duration metric: took 243.975646ms to configureAuth
	I0731 21:18:32.595119  195076 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:18:32.595311  195076 config.go:182] Loaded profile config "newest-cni-586791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:18:32.595415  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:32.597993  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.598356  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:32.598380  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.598601  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:32.598812  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:32.599043  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:32.599182  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:32.599361  195076 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:32.599588  195076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:18:32.599612  195076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:18:32.882078  195076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:18:32.882106  195076 main.go:141] libmachine: Checking connection to Docker...
	I0731 21:18:32.882116  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetURL
	I0731 21:18:32.883494  195076 main.go:141] libmachine: (newest-cni-586791) DBG | Using libvirt version 6000000
	I0731 21:18:32.885719  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.886039  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:32.886060  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.886249  195076 main.go:141] libmachine: Docker is up and running!
	I0731 21:18:32.886265  195076 main.go:141] libmachine: Reticulating splines...
	I0731 21:18:32.886272  195076 client.go:171] duration metric: took 25.032540789s to LocalClient.Create
	I0731 21:18:32.886296  195076 start.go:167] duration metric: took 25.032624422s to libmachine.API.Create "newest-cni-586791"
	I0731 21:18:32.886306  195076 start.go:293] postStartSetup for "newest-cni-586791" (driver="kvm2")
	I0731 21:18:32.886319  195076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:18:32.886335  195076 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:18:32.886633  195076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:18:32.886666  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:32.889103  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.889464  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:32.889489  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:32.889666  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:32.889859  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:32.890036  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:32.890173  195076 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:18:32.976219  195076 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:18:32.981066  195076 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:18:32.981092  195076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 21:18:32.981164  195076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 21:18:32.981269  195076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 21:18:32.981410  195076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:18:32.991099  195076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 21:18:33.016031  195076 start.go:296] duration metric: took 129.70874ms for postStartSetup
	I0731 21:18:33.016094  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetConfigRaw
	I0731 21:18:33.016795  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:18:33.019446  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.019848  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:33.019871  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.020104  195076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/config.json ...
	I0731 21:18:33.020343  195076 start.go:128] duration metric: took 25.186296419s to createHost
	I0731 21:18:33.020374  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:33.022482  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.022818  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:33.022844  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.022951  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:33.023187  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:33.023371  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:33.023556  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:33.023781  195076 main.go:141] libmachine: Using SSH client type: native
	I0731 21:18:33.024100  195076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:18:33.024128  195076 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:18:33.134789  195076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722460713.092689679
	
	I0731 21:18:33.134815  195076 fix.go:216] guest clock: 1722460713.092689679
	I0731 21:18:33.134825  195076 fix.go:229] Guest: 2024-07-31 21:18:33.092689679 +0000 UTC Remote: 2024-07-31 21:18:33.020358326 +0000 UTC m=+25.298505285 (delta=72.331353ms)
	I0731 21:18:33.134851  195076 fix.go:200] guest clock delta is within tolerance: 72.331353ms
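The fix.go lines above compare the guest clock (read over SSH) against the host clock and accept the ~72ms delta as within tolerance. The sketch below shows that comparison in Go; the 2-second tolerance used here is an assumed value for illustration, not necessarily the limit minikube applies.

	// clockskew.go: compare a guest timestamp against the local clock and decide
	// whether the skew is small enough to ignore. Tolerance value is illustrative.
	package main

	import (
		"fmt"
		"time"
	)

	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Guest clock as reported in the log: 1722460713.092689679 (Unix seconds.nanoseconds).
		guest := time.Unix(1722460713, 92689679).UTC()
		host := time.Now().UTC()
		delta, ok := withinTolerance(guest, host, 2*time.Second)
		if ok {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
		}
	}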
	I0731 21:18:33.134862  195076 start.go:83] releasing machines lock for "newest-cni-586791", held for 25.300914169s
	I0731 21:18:33.134888  195076 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:18:33.135165  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:18:33.137873  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.138186  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:33.138234  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.138342  195076 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:18:33.138849  195076 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:18:33.139068  195076 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:18:33.139183  195076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:18:33.139234  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:33.139325  195076 ssh_runner.go:195] Run: cat /version.json
	I0731 21:18:33.139366  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:18:33.141932  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.142027  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.142339  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:33.142369  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.142410  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:33.142429  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:33.142526  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:33.142552  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:18:33.142695  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:33.142746  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:18:33.142840  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:33.143042  195076 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:18:33.143112  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:18:33.143294  195076 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:18:33.243018  195076 ssh_runner.go:195] Run: systemctl --version
	I0731 21:18:33.249111  195076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:18:33.414180  195076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:18:33.420334  195076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:18:33.420400  195076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:18:33.437268  195076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:18:33.437295  195076 start.go:495] detecting cgroup driver to use...
	I0731 21:18:33.437403  195076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:18:33.456045  195076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:18:33.470693  195076 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:18:33.470767  195076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:18:33.484290  195076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:18:33.497952  195076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:18:33.620236  195076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:18:33.780495  195076 docker.go:233] disabling docker service ...
	I0731 21:18:33.780569  195076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:18:33.796983  195076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:18:33.810424  195076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:18:33.942578  195076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:18:34.055709  195076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:18:34.070112  195076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:18:34.088786  195076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:18:34.088851  195076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:34.100031  195076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:18:34.100094  195076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:34.110937  195076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:34.121824  195076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:34.132231  195076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:18:34.143668  195076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:34.154349  195076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:18:34.171839  195076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
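The sequence of sed commands above edits the CRI-O drop-in config: pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged low ports via default_sysctls. The Go sketch below performs equivalent rewrites on the same file; it is a simplified illustration of those edits, not minikube's crio.go.

	// crioconf.go: rewrite a CRI-O drop-in the way the sed commands in the log do:
	// set pause_image, force the cgroupfs cgroup manager, and allow unprivileged
	// low ports. Simplified illustration of the same edits.
	package main

	import (
		"log"
		"os"
		"regexp"
		"strings"
	)

	func rewriteCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		conf := string(data)
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		if !strings.Contains(conf, "default_sysctls") {
			conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		}
		return os.WriteFile(path, []byte(conf), 0644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			log.Fatal(err)
		}
		// After rewriting, the runtime still has to be reloaded and restarted
		// (systemctl daemon-reload && systemctl restart crio), as the log does next.
	}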
	I0731 21:18:34.182087  195076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:18:34.191364  195076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:18:34.191424  195076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:18:34.206233  195076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:18:34.216112  195076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:18:34.358677  195076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:18:34.503446  195076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:18:34.503521  195076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:18:34.508702  195076 start.go:563] Will wait 60s for crictl version
	I0731 21:18:34.508764  195076 ssh_runner.go:195] Run: which crictl
	I0731 21:18:34.512491  195076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:18:34.553688  195076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
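The crictl version output above is plain "Key:  value" text. A small, illustrative Go snippet for parsing it into a map follows; the field names come from the output itself and the parsing approach is an assumption, not how minikube consumes it.

	// crictlversion.go: parse the "Key:  value" lines that `crictl version` prints,
	// as seen in the log above. Parsing approach is illustrative.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	const output = `Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1`

	func parseVersion(s string) map[string]string {
		fields := make(map[string]string)
		sc := bufio.NewScanner(strings.NewReader(s))
		for sc.Scan() {
			parts := strings.SplitN(sc.Text(), ":", 2)
			if len(parts) == 2 {
				fields[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
			}
		}
		return fields
	}

	func main() {
		v := parseVersion(output)
		fmt.Printf("runtime %s %s (API %s)\n", v["RuntimeName"], v["RuntimeVersion"], v["RuntimeApiVersion"])
	}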
	I0731 21:18:34.553805  195076 ssh_runner.go:195] Run: crio --version
	I0731 21:18:34.583706  195076 ssh_runner.go:195] Run: crio --version
	I0731 21:18:34.618010  195076 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:18:34.619317  195076 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:18:34.622146  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:34.622577  195076 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:18:22 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:18:34.622606  195076 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:18:34.622843  195076 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:18:34.626826  195076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:18:34.640863  195076 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.069762426Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-bqgfg,Uid:9010990b-36d5-4c0d-adc9-5d9483bd5d44,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459839271398020,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:03:58.943326262Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-9qnjq,Uid:2350f15d-0e3d-429f-a21f-8cbd41407d7e,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459839252147364,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:03:58.937656886Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a96988872910546170191884cc4fec9d3f84727178e25efac08ebe55d85a7216,Metadata:&PodSandboxMetadata{Name:metrics-server-78fcd8795b-86m8h,Uid:3c4df12a-3d52-48dc-9998-587565d13dca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459838870753230,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-78fcd8795b-86m8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c4df12a-3d52-48dc-9998-587565d13dca,k8s-app: metrics-server,pod-template-hash: 78fcd8795b,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-07-31T21:03:58.561154528Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6bfc781b-1370-4460-8018-a1279e37b39d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459838813097924,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T21:03:58.498868764Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&PodSandboxMetadata{Name:kube-proxy-b4h2z,Uid:328ebd98-accf-43da-ae60-40fc93f34116,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459837766033178,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T21:03:57.436054291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-916885,Uid:39a7cebf4279e5eab84727743fe9b711,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722459826295878063,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.239:8443,kubernetes.io/config.hash: 39a7cebf4279e5eab84727743fe9b711,kubernetes.io/config.seen: 2024-07-31T21:03:45.825617343Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3275ffc9c3ba372d2a5bd99eb803e16
5221016c4e39eb025a82c9c76b937251e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-916885,Uid:077a0ac9e1a343879e95368a267db6cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459826291735629,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 077a0ac9e1a343879e95368a267db6cd,kubernetes.io/config.seen: 2024-07-31T21:03:45.825619305Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-916885,Uid:4b005f25a852b06b06ff5498175ec2f7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459826285438126,Labels:map[string]string{component: kube-sch
eduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4b005f25a852b06b06ff5498175ec2f7,kubernetes.io/config.seen: 2024-07-31T21:03:45.825621156Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-916885,Uid:df9db57d18dc788fa09a42bf2fd340c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722459826271602132,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.239:237
9,kubernetes.io/config.hash: df9db57d18dc788fa09a42bf2fd340c3,kubernetes.io/config.seen: 2024-07-31T21:03:45.825612671Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-916885,Uid:39a7cebf4279e5eab84727743fe9b711,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722459539554981354,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.239:8443,kubernetes.io/config.hash: 39a7cebf4279e5eab84727743fe9b711,kubernetes.io/config.seen: 2024-07-31T20:58:59.037373504Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=026ef2e8-9b93-4205-afea-a5cf5fbfbf90 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.070395419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0df7bca-8b8b-4422-b11a-ade388dc539c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.070517080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0df7bca-8b8b-4422-b11a-ade388dc539c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.070798973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0,PodSandboxId:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839624805026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51,PodSandboxId:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839583366097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec,PodSandboxId:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722459839078526963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11,PodSandboxId:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722459838069264460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042,PodSandboxId:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722459826550983146,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959,PodSandboxId:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722459826548105160,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9,PodSandboxId:3275ffc9c3ba372d2a5bd99eb803e165221016c4e39eb025a82c9c76b937251e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722459826487201402,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360,PodSandboxId:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722459826489063733,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0,PodSandboxId:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722459539719672412,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0df7bca-8b8b-4422-b11a-ade388dc539c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.085300779Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7508ccf1-84ed-4ba2-bacd-bd1b8997936c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.085394340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7508ccf1-84ed-4ba2-bacd-bd1b8997936c name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.086341529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f4a606f-e48c-4a0b-a1cc-0f291e215ec1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.086798857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460717086775115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f4a606f-e48c-4a0b-a1cc-0f291e215ec1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.087240580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43e43be0-db8c-4ca3-86c6-f844f7aeaca8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.087311134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43e43be0-db8c-4ca3-86c6-f844f7aeaca8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.087633119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0,PodSandboxId:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839624805026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51,PodSandboxId:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839583366097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec,PodSandboxId:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722459839078526963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11,PodSandboxId:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722459838069264460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042,PodSandboxId:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722459826550983146,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959,PodSandboxId:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722459826548105160,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9,PodSandboxId:3275ffc9c3ba372d2a5bd99eb803e165221016c4e39eb025a82c9c76b937251e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722459826487201402,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360,PodSandboxId:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722459826489063733,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0,PodSandboxId:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722459539719672412,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43e43be0-db8c-4ca3-86c6-f844f7aeaca8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.143195593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a46c712d-932d-465f-b8d0-8acd2c5f640a name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.143291478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a46c712d-932d-465f-b8d0-8acd2c5f640a name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.144675091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=775afc86-f08c-44fd-a6af-ebb689f59bba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.145193757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460717145161709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=775afc86-f08c-44fd-a6af-ebb689f59bba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.146120878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44383139-d096-4df9-82b9-9e9328e07d30 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.146232156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44383139-d096-4df9-82b9-9e9328e07d30 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.146609097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0,PodSandboxId:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839624805026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51,PodSandboxId:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839583366097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec,PodSandboxId:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722459839078526963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11,PodSandboxId:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722459838069264460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042,PodSandboxId:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722459826550983146,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959,PodSandboxId:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722459826548105160,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9,PodSandboxId:3275ffc9c3ba372d2a5bd99eb803e165221016c4e39eb025a82c9c76b937251e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722459826487201402,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360,PodSandboxId:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722459826489063733,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0,PodSandboxId:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722459539719672412,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44383139-d096-4df9-82b9-9e9328e07d30 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.182205673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b40cef8-d0b3-4154-9cde-01b7d94e1c24 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.182294354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b40cef8-d0b3-4154-9cde-01b7d94e1c24 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.183874323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba66ac5f-f6aa-440e-8c58-dbce5c0c64ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.184231257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460717184208556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba66ac5f-f6aa-440e-8c58-dbce5c0c64ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.184916822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=768eb741-2b77-421d-84c1-292167937420 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.184989691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=768eb741-2b77-421d-84c1-292167937420 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:37 no-preload-916885 crio[724]: time="2024-07-31 21:18:37.185246540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0,PodSandboxId:6f50b87f8d38e0669b501d0fe348820500f954a6232648339095dfef3e528fcc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839624805026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bqgfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9010990b-36d5-4c0d-adc9-5d9483bd5d44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51,PodSandboxId:685e4d79b333b85663a9cf9b0fa403094552066244f0e300fbbbb075aea29b93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459839583366097,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9qnjq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2350f15d-0e3d-429f-a21f-8cbd41407d7e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec,PodSandboxId:c53427f450026f0d3335d9b48da02391ebc00f95b5c1efe8ade865bac6db4af0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722459839078526963,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bfc781b-1370-4460-8018-a1279e37b39d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11,PodSandboxId:616f0efde70f8c7adfb51dd6e6975a6c9ae8b56b6a7e5fa24af54729e5d42a94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1722459838069264460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4h2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328ebd98-accf-43da-ae60-40fc93f34116,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042,PodSandboxId:5b2fc40540948899d1e8a477dcc86d7a1aba1ed9ec66c35877477f38d707c5f3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722459826550983146,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df9db57d18dc788fa09a42bf2fd340c3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959,PodSandboxId:25090de3a63b61aa61459b0e51e8db808fc8a9ef37c479e4d7b0ea913f589128,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722459826548105160,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9,PodSandboxId:3275ffc9c3ba372d2a5bd99eb803e165221016c4e39eb025a82c9c76b937251e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722459826487201402,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077a0ac9e1a343879e95368a267db6cd,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360,PodSandboxId:9cb5caa5f60eec7ff5fcdafb0b064ad85dcc04a9d1b371989e32786d1cdde540,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722459826489063733,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b005f25a852b06b06ff5498175ec2f7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0,PodSandboxId:fc5a36a16cf030507bf30af46d5629e2163912b58f508378b5a7f67564e725a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722459539719672412,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-916885,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39a7cebf4279e5eab84727743fe9b711,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=768eb741-2b77-421d-84c1-292167937420 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f9e2e1cd722       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   6f50b87f8d38e       coredns-5cfdc65f69-bqgfg
	2db39be0da847       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   685e4d79b333b       coredns-5cfdc65f69-9qnjq
	c67c2b844ed87       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   c53427f450026       storage-provisioner
	55f2f77153e80       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   616f0efde70f8       kube-proxy-b4h2z
	d21f0dca410fa       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   14 minutes ago      Running             etcd                      2                   5b2fc40540948       etcd-no-preload-916885
	9f5e4d14b8636       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Running             kube-apiserver            2                   25090de3a63b6       kube-apiserver-no-preload-916885
	7d924058498dc       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   14 minutes ago      Running             kube-scheduler            2                   9cb5caa5f60ee       kube-scheduler-no-preload-916885
	01c90a9147c6e       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   14 minutes ago      Running             kube-controller-manager   2                   3275ffc9c3ba3       kube-controller-manager-no-preload-916885
	a3169bc1dd903       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            1                   fc5a36a16cf03       kube-apiserver-no-preload-916885
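	
	The table above is the CRI-O runtime's view of the containers on this node. An equivalent snapshot can be taken directly against the guest; this is only a sketch, assuming crictl is available in the minikube guest (it normally is with the crio runtime) and reusing the profile name from this run:
	
	  out/minikube-linux-amd64 -p no-preload-916885 ssh "sudo crictl ps -a"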
	
	
	==> coredns [2db39be0da84703a46cef6c35ba354293c592070c9cc009a65504d253aa91b51] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [70f9e2e1cd7228207faa2ff002f8cc03a9d98f4b388fd71a1a40ca232a76f4a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-916885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-916885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=no-preload-916885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T21_03_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 21:03:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-916885
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:18:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:14:15 +0000   Wed, 31 Jul 2024 21:03:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:14:15 +0000   Wed, 31 Jul 2024 21:03:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:14:15 +0000   Wed, 31 Jul 2024 21:03:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:14:15 +0000   Wed, 31 Jul 2024 21:03:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.239
	  Hostname:    no-preload-916885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c9e297d34d5406dae42fd7877c69eaf
	  System UUID:                8c9e297d-34d5-406d-ae42-fd7877c69eaf
	  Boot ID:                    80b9904a-fd63-485d-85db-7980941c521e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-9qnjq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5cfdc65f69-bqgfg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-916885                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-916885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-916885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-b4h2z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-916885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-78fcd8795b-86m8h              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x6 over 14m)  kubelet          Node no-preload-916885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x5 over 14m)  kubelet          Node no-preload-916885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x5 over 14m)  kubelet          Node no-preload-916885 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-916885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-916885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-916885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-916885 event: Registered Node no-preload-916885 in Controller
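	
	If the profile is still running, the same node view can be regenerated directly from the cluster; the context name is assumed to match the minikube profile used in this run:
	
	  kubectl --context no-preload-916885 describe node no-preload-916885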
	
	
	==> dmesg <==
	[  +0.050800] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039506] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.742462] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.540903] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.370471] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.695858] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.060259] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053881] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.156929] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.172507] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.302205] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[ +14.856787] systemd-fstab-generator[1179]: Ignoring "noauto" option for root device
	[  +0.060271] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.776849] systemd-fstab-generator[1299]: Ignoring "noauto" option for root device
	[Jul31 20:59] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.439040] kauditd_printk_skb: 88 callbacks suppressed
	[Jul31 21:03] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.015409] systemd-fstab-generator[2967]: Ignoring "noauto" option for root device
	[  +4.598244] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.454665] systemd-fstab-generator[3290]: Ignoring "noauto" option for root device
	[  +4.933013] systemd-fstab-generator[3401]: Ignoring "noauto" option for root device
	[  +0.095244] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 21:04] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [d21f0dca410fa197640a5a8e99f24dd152c79027eaa2d252767d3690691a6042] <==
	{"level":"info","ts":"2024-07-31T21:03:47.085498Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"97631f5e3b276dee","initial-advertise-peer-urls":["https://192.168.72.239:2380"],"listen-peer-urls":["https://192.168.72.239:2380"],"advertise-client-urls":["https://192.168.72.239:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.239:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T21:03:47.085617Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T21:03:47.728526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T21:03:47.728633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T21:03:47.728669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee received MsgPreVoteResp from 97631f5e3b276dee at term 1"}
	{"level":"info","ts":"2024-07-31T21:03:47.728698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T21:03:47.728723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee received MsgVoteResp from 97631f5e3b276dee at term 2"}
	{"level":"info","ts":"2024-07-31T21:03:47.728749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97631f5e3b276dee became leader at term 2"}
	{"level":"info","ts":"2024-07-31T21:03:47.728784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97631f5e3b276dee elected leader 97631f5e3b276dee at term 2"}
	{"level":"info","ts":"2024-07-31T21:03:47.733673Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"97631f5e3b276dee","local-member-attributes":"{Name:no-preload-916885 ClientURLs:[https://192.168.72.239:2379]}","request-path":"/0/members/97631f5e3b276dee/attributes","cluster-id":"df08e509b174dc93","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T21:03:47.733902Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:03:47.734541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T21:03:47.734729Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T21:03:47.734764Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T21:03:47.734821Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:03:47.738985Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:03:47.740422Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"df08e509b174dc93","local-member-id":"97631f5e3b276dee","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:03:47.743758Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:03:47.744053Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T21:03:47.74465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T21:03:47.747323Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T21:03:47.750046Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.239:2379"}
	{"level":"info","ts":"2024-07-31T21:13:47.810783Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":705}
	{"level":"info","ts":"2024-07-31T21:13:47.820686Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":705,"took":"9.536648ms","hash":84757775,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-31T21:13:47.820752Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":84757775,"revision":705,"compact-revision":-1}
	
	
	==> kernel <==
	 21:18:37 up 20 min,  0 users,  load average: 0.15, 0.28, 0.25
	Linux no-preload-916885 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9f5e4d14b86360dbe264196f7a74b632354f25386ad11233eb96c6a134d77959] <==
	W0731 21:13:50.702932       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:13:50.703091       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 21:13:50.704333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:13:50.704378       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:14:50.705174       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:14:50.705431       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0731 21:14:50.705650       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:14:50.705734       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 21:14:50.706784       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:14:50.706817       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:16:50.706975       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:16:50.707090       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0731 21:16:50.707193       1 handler_proxy.go:99] no RequestInfo found in the context
	E0731 21:16:50.707266       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0731 21:16:50.708273       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0731 21:16:50.708349       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
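	
	The repeated 503s above mean the v1beta1.metrics.k8s.io APIService is registered but its backing metrics-server endpoints were not available at these times, so the aggregated OpenAPI download keeps failing. A hedged way to confirm this from the host (context name taken from this run; the k8s-app=metrics-server label is an assumption based on the usual addon manifests):
	
	  kubectl --context no-preload-916885 get apiservice v1beta1.metrics.k8s.io -o yaml
	  kubectl --context no-preload-916885 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context no-preload-916885 get --raw /apis/metrics.k8s.io/v1beta1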
	
	
	==> kube-apiserver [a3169bc1dd903b3e599effab3c233eb4cbd4b31468090864ee6f8909cf2635b0] <==
	W0731 21:03:40.070433       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.082127       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.116844       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.139799       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.141267       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.147006       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.162824       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.195196       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.204866       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.246774       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.270780       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.311820       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.334086       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.336690       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.344346       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.418017       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.450259       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.450521       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.475056       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.506850       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.700685       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.734676       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.784396       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:40.857342       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 21:03:41.100790       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
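	
	This earlier apiserver instance (Attempt 1) logged connection-refused errors against etcd at 127.0.0.1:2379 and exited; the Attempt 2 control-plane containers listed above were created a few minutes later, after etcd (also at Attempt 2) came up. A hedged way to inspect the runtime's view of etcd on the node, reusing the profile name and the etcd container ID from this run:
	
	  out/minikube-linux-amd64 -p no-preload-916885 ssh "sudo crictl ps -a --name etcd"
	  out/minikube-linux-amd64 -p no-preload-916885 ssh "sudo crictl logs d21f0dca410fa"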
	
	
	==> kube-controller-manager [01c90a9147c6edc05d39c3f78f5d2597a437b773c93a1324f79a2faa7ed03aa9] <==
	E0731 21:13:27.658091       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:13:27.812512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:13:57.664599       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:13:57.824590       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:14:15.136589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-916885"
	E0731 21:14:27.671632       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:14:27.832525       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:14:57.678882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:14:57.841152       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:14:58.759526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="167.17µs"
	I0731 21:15:11.747024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="108.389µs"
	E0731 21:15:27.685917       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:15:27.850990       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:15:57.693281       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:15:57.859884       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:16:27.700200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:16:27.869881       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:16:57.706983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:16:57.879366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:17:27.714179       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:17:27.888174       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:17:57.721865       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:17:57.897079       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:18:27.729335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0731 21:18:27.906444       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [55f2f77153e80ecfd09afe39cb5e448010a1de1a1fca2bc54834e055477f5c11] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 21:03:58.387748       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 21:03:58.426812       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.239"]
	E0731 21:03:58.427047       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 21:03:58.623623       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 21:03:58.623674       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 21:03:58.623705       1 server_linux.go:170] "Using iptables Proxier"
	I0731 21:03:58.629824       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 21:03:58.630130       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 21:03:58.630160       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 21:03:58.633402       1 config.go:197] "Starting service config controller"
	I0731 21:03:58.633572       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 21:03:58.633652       1 config.go:104] "Starting endpoint slice config controller"
	I0731 21:03:58.633658       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 21:03:58.634323       1 config.go:326] "Starting node config controller"
	I0731 21:03:58.634354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 21:03:58.734243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 21:03:58.734361       1 shared_informer.go:320] Caches are synced for service config
	I0731 21:03:58.734684       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d924058498dcb3735758da513f70d60e4974b432aa6434aa13593b6ff22d360] <==
	W0731 21:03:50.657367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 21:03:50.657436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.667601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 21:03:50.667700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.746634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 21:03:50.746705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.845686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 21:03:50.845749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.870199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 21:03:50.870317       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.917156       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 21:03:50.917270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.936319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 21:03:50.936432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.971682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 21:03:50.971904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:50.978943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 21:03:50.979122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:51.019920       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 21:03:51.020002       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0731 21:03:51.057370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 21:03:51.057638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0731 21:03:51.171879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 21:03:51.171993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0731 21:03:53.313567       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 21:15:52 no-preload-916885 kubelet[3297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:15:52 no-preload-916885 kubelet[3297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:15:52 no-preload-916885 kubelet[3297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:16:04 no-preload-916885 kubelet[3297]: E0731 21:16:04.731517    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:16:16 no-preload-916885 kubelet[3297]: E0731 21:16:16.733022    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:16:29 no-preload-916885 kubelet[3297]: E0731 21:16:29.730268    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:16:42 no-preload-916885 kubelet[3297]: E0731 21:16:42.734254    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:16:52 no-preload-916885 kubelet[3297]: E0731 21:16:52.780609    3297 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:16:52 no-preload-916885 kubelet[3297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:16:52 no-preload-916885 kubelet[3297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:16:52 no-preload-916885 kubelet[3297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:16:52 no-preload-916885 kubelet[3297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:16:55 no-preload-916885 kubelet[3297]: E0731 21:16:55.731122    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:17:06 no-preload-916885 kubelet[3297]: E0731 21:17:06.731067    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:17:21 no-preload-916885 kubelet[3297]: E0731 21:17:21.731416    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:17:36 no-preload-916885 kubelet[3297]: E0731 21:17:36.731253    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:17:49 no-preload-916885 kubelet[3297]: E0731 21:17:49.730280    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:17:52 no-preload-916885 kubelet[3297]: E0731 21:17:52.782731    3297 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:17:52 no-preload-916885 kubelet[3297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:17:52 no-preload-916885 kubelet[3297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:17:52 no-preload-916885 kubelet[3297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:17:52 no-preload-916885 kubelet[3297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:18:04 no-preload-916885 kubelet[3297]: E0731 21:18:04.732059    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:18:17 no-preload-916885 kubelet[3297]: E0731 21:18:17.730809    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	Jul 31 21:18:29 no-preload-916885 kubelet[3297]: E0731 21:18:29.730941    3297 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-86m8h" podUID="3c4df12a-3d52-48dc-9998-587565d13dca"
	
	
	==> storage-provisioner [c67c2b844ed877679873aa317c98c7e636bb9c9d0f42ada12a2f9388996b9fec] <==
	I0731 21:03:59.248646       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 21:03:59.271707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 21:03:59.271838       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:03:59.288164       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:03:59.290657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-916885_99091271-805e-47df-97b1-345f1aaa81f8!
	I0731 21:03:59.294573       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ded94fb0-c2da-4687-a958-6ba7dca940bb", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-916885_99091271-805e-47df-97b1-345f1aaa81f8 became leader
	I0731 21:03:59.391550       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-916885_99091271-805e-47df-97b1-345f1aaa81f8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-916885 -n no-preload-916885
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-916885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-86m8h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-916885 describe pod metrics-server-78fcd8795b-86m8h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-916885 describe pod metrics-server-78fcd8795b-86m8h: exit status 1 (68.344194ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-86m8h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-916885 describe pod metrics-server-78fcd8795b-86m8h: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (326.97s)
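For reference, the post-mortem above can be replayed by hand against the profile from this run. This is only a sketch: it assumes the no-preload-916885 profile and its kubeconfig context still exist, and <pod-name> stands for whatever the second command reports (the commands themselves are the same ones the harness ran above):

	# Check the API server state of the profile (same status call the harness makes)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-916885 -n no-preload-916885

	# List pods that are not Running, across all namespaces
	kubectl --context no-preload-916885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

	# Describe one of the reported pods (in this run it returned NotFound because the pod no longer existed)
	kubectl --context no-preload-916885 describe pod <pod-name>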

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (384.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831240 -n embed-certs-831240
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-31 21:19:53.067611593 +0000 UTC m=+6765.317969426
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-831240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-831240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.854µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-831240 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-831240 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-831240 logs -n 25: (1.166566529s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	| start   | -p newest-cni-586791 --memory=2200 --alsologtostderr   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	| addons  | enable metrics-server -p newest-cni-586791             | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:18 UTC | 31 Jul 24 21:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-586791                  | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-586791 --memory=2200 --alsologtostderr   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-586791 image list                           | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	| delete  | -p newest-cni-586791                                   | newest-cni-586791            | jenkins | v1.33.1 | 31 Jul 24 21:19 UTC | 31 Jul 24 21:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:19:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:19:05.761771  195816 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:19:05.761889  195816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:19:05.761897  195816 out.go:304] Setting ErrFile to fd 2...
	I0731 21:19:05.761901  195816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:19:05.762080  195816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 21:19:05.762593  195816 out.go:298] Setting JSON to false
	I0731 21:19:05.763476  195816 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10882,"bootTime":1722449864,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:19:05.763532  195816 start.go:139] virtualization: kvm guest
	I0731 21:19:05.765667  195816 out.go:177] * [newest-cni-586791] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:19:05.767081  195816 notify.go:220] Checking for updates...
	I0731 21:19:05.767099  195816 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 21:19:05.768459  195816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:19:05.769907  195816 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:19:05.771337  195816 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 21:19:05.772648  195816 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:19:05.773990  195816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:19:05.775596  195816 config.go:182] Loaded profile config "newest-cni-586791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:19:05.775958  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:05.776003  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:05.791822  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0731 21:19:05.792250  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:05.792776  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:05.792799  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:05.793105  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:05.793294  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:05.793590  195816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:19:05.793882  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:05.793920  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:05.810777  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0731 21:19:05.811278  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:05.811803  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:05.811826  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:05.812122  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:05.812294  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:05.846760  195816 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 21:19:05.848266  195816 start.go:297] selected driver: kvm2
	I0731 21:19:05.848283  195816 start.go:901] validating driver "kvm2" against &{Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:19:05.848437  195816 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:19:05.849205  195816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:19:05.849282  195816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:19:05.864357  195816 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:19:05.864764  195816 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 21:19:05.864843  195816 cni.go:84] Creating CNI manager for ""
	I0731 21:19:05.864864  195816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:19:05.864906  195816 start.go:340] cluster config:
	{Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:19:05.865016  195816 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:19:05.866876  195816 out.go:177] * Starting "newest-cni-586791" primary control-plane node in "newest-cni-586791" cluster
	I0731 21:19:05.868074  195816 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:19:05.868111  195816 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:19:05.868122  195816 cache.go:56] Caching tarball of preloaded images
	I0731 21:19:05.868210  195816 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:19:05.868221  195816 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 21:19:05.868314  195816 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/config.json ...
	I0731 21:19:05.868485  195816 start.go:360] acquireMachinesLock for newest-cni-586791: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:19:05.868555  195816 start.go:364] duration metric: took 51.983µs to acquireMachinesLock for "newest-cni-586791"
	I0731 21:19:05.868571  195816 start.go:96] Skipping create...Using existing machine configuration
	I0731 21:19:05.868579  195816 fix.go:54] fixHost starting: 
	I0731 21:19:05.868864  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:05.868896  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:05.884338  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40253
	I0731 21:19:05.884817  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:05.885288  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:05.885303  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:05.885681  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:05.885899  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:05.886084  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:05.887945  195816 fix.go:112] recreateIfNeeded on newest-cni-586791: state=Stopped err=<nil>
	I0731 21:19:05.887987  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	W0731 21:19:05.888180  195816 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 21:19:05.890790  195816 out.go:177] * Restarting existing kvm2 VM for "newest-cni-586791" ...
	I0731 21:19:05.892018  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Start
	I0731 21:19:05.892197  195816 main.go:141] libmachine: (newest-cni-586791) Ensuring networks are active...
	I0731 21:19:05.892908  195816 main.go:141] libmachine: (newest-cni-586791) Ensuring network default is active
	I0731 21:19:05.893261  195816 main.go:141] libmachine: (newest-cni-586791) Ensuring network mk-newest-cni-586791 is active
	I0731 21:19:05.893620  195816 main.go:141] libmachine: (newest-cni-586791) Getting domain xml...
	I0731 21:19:05.894297  195816 main.go:141] libmachine: (newest-cni-586791) Creating domain...
	I0731 21:19:07.138447  195816 main.go:141] libmachine: (newest-cni-586791) Waiting to get IP...
	I0731 21:19:07.139504  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:07.139923  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:07.140000  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:07.139908  195851 retry.go:31] will retry after 254.920523ms: waiting for machine to come up
	I0731 21:19:07.396542  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:07.397038  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:07.397061  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:07.396985  195851 retry.go:31] will retry after 250.333596ms: waiting for machine to come up
	I0731 21:19:07.649421  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:07.649965  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:07.649992  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:07.649905  195851 retry.go:31] will retry after 395.636435ms: waiting for machine to come up
	I0731 21:19:08.047593  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:08.047975  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:08.048007  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:08.047938  195851 retry.go:31] will retry after 436.386926ms: waiting for machine to come up
	I0731 21:19:08.485674  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:08.486135  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:08.486165  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:08.486089  195851 retry.go:31] will retry after 490.347633ms: waiting for machine to come up
	I0731 21:19:08.977949  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:08.978481  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:08.978512  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:08.978429  195851 retry.go:31] will retry after 623.333636ms: waiting for machine to come up
	I0731 21:19:09.602897  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:09.603418  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:09.603447  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:09.603359  195851 retry.go:31] will retry after 996.812783ms: waiting for machine to come up
	I0731 21:19:10.601466  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:10.601947  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:10.601977  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:10.601898  195851 retry.go:31] will retry after 1.289057078s: waiting for machine to come up
	I0731 21:19:11.892558  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:11.892995  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:11.893027  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:11.892953  195851 retry.go:31] will retry after 1.739936764s: waiting for machine to come up
	I0731 21:19:13.634458  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:13.634910  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:13.634942  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:13.634861  195851 retry.go:31] will retry after 1.886570052s: waiting for machine to come up
	I0731 21:19:15.523611  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:15.524088  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:15.524119  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:15.524037  195851 retry.go:31] will retry after 2.741852261s: waiting for machine to come up
	I0731 21:19:18.267418  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:18.267884  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:18.267911  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:18.267838  195851 retry.go:31] will retry after 2.817878514s: waiting for machine to come up
	I0731 21:19:21.087488  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:21.087925  195816 main.go:141] libmachine: (newest-cni-586791) DBG | unable to find current IP address of domain newest-cni-586791 in network mk-newest-cni-586791
	I0731 21:19:21.087962  195816 main.go:141] libmachine: (newest-cni-586791) DBG | I0731 21:19:21.087888  195851 retry.go:31] will retry after 3.35967442s: waiting for machine to come up
	I0731 21:19:24.451374  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.451865  195816 main.go:141] libmachine: (newest-cni-586791) Found IP for machine: 192.168.61.136
	I0731 21:19:24.451885  195816 main.go:141] libmachine: (newest-cni-586791) Reserving static IP address...
	I0731 21:19:24.451898  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has current primary IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.452592  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "newest-cni-586791", mac: "52:54:00:c5:e4:c3", ip: "192.168.61.136"} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.452621  195816 main.go:141] libmachine: (newest-cni-586791) Reserved static IP address: 192.168.61.136
	I0731 21:19:24.452634  195816 main.go:141] libmachine: (newest-cni-586791) DBG | skip adding static IP to network mk-newest-cni-586791 - found existing host DHCP lease matching {name: "newest-cni-586791", mac: "52:54:00:c5:e4:c3", ip: "192.168.61.136"}
	I0731 21:19:24.452645  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Getting to WaitForSSH function...
	I0731 21:19:24.452658  195816 main.go:141] libmachine: (newest-cni-586791) Waiting for SSH to be available...
	I0731 21:19:24.455301  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.455684  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.455718  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.455790  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Using SSH client type: external
	I0731 21:19:24.455837  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa (-rw-------)
	I0731 21:19:24.455881  195816 main.go:141] libmachine: (newest-cni-586791) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:19:24.455898  195816 main.go:141] libmachine: (newest-cni-586791) DBG | About to run SSH command:
	I0731 21:19:24.455909  195816 main.go:141] libmachine: (newest-cni-586791) DBG | exit 0
	I0731 21:19:24.585776  195816 main.go:141] libmachine: (newest-cni-586791) DBG | SSH cmd err, output: <nil>: 
	I0731 21:19:24.586175  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetConfigRaw
	I0731 21:19:24.586910  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:19:24.589668  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.589997  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.590066  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.590418  195816 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/config.json ...
	I0731 21:19:24.590652  195816 machine.go:94] provisionDockerMachine start ...
	I0731 21:19:24.590673  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:24.590907  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:24.593553  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.593922  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.593945  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.594101  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:24.594328  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.594503  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.594639  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:24.594827  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:24.595016  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:24.595027  195816 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 21:19:24.705959  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 21:19:24.705996  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:19:24.706261  195816 buildroot.go:166] provisioning hostname "newest-cni-586791"
	I0731 21:19:24.706294  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:19:24.706509  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:24.709299  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.709673  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.709707  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.709775  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:24.709976  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.710140  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.710277  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:24.710491  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:24.710694  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:24.710710  195816 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-586791 && echo "newest-cni-586791" | sudo tee /etc/hostname
	I0731 21:19:24.840802  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-586791
	
	I0731 21:19:24.840830  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:24.843581  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.843959  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.843982  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.844252  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:24.844448  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.844644  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:24.844782  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:24.844933  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:24.845152  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:24.845180  195816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-586791' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-586791/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-586791' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:19:24.969599  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:19:24.969631  195816 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 21:19:24.969706  195816 buildroot.go:174] setting up certificates
	I0731 21:19:24.969720  195816 provision.go:84] configureAuth start
	I0731 21:19:24.969740  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetMachineName
	I0731 21:19:24.970090  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:19:24.973184  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.973592  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.973646  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.973764  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:24.976025  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.976355  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:24.976394  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:24.976555  195816 provision.go:143] copyHostCerts
	I0731 21:19:24.976607  195816 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 21:19:24.976617  195816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 21:19:24.976683  195816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 21:19:24.976788  195816 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 21:19:24.976797  195816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 21:19:24.976820  195816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 21:19:24.976872  195816 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 21:19:24.976881  195816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 21:19:24.976911  195816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 21:19:24.976979  195816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.newest-cni-586791 san=[127.0.0.1 192.168.61.136 localhost minikube newest-cni-586791]
	I0731 21:19:25.035238  195816 provision.go:177] copyRemoteCerts
	I0731 21:19:25.035297  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:19:25.035330  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.037856  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.038216  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.038257  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.038475  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.038660  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.038818  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.038944  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:25.129256  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 21:19:25.157699  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 21:19:25.183755  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 21:19:25.209683  195816 provision.go:87] duration metric: took 239.949293ms to configureAuth
	I0731 21:19:25.209712  195816 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:19:25.209890  195816 config.go:182] Loaded profile config "newest-cni-586791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:19:25.209964  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.212368  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.212729  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.212757  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.212967  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.213149  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.213322  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.213515  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.213731  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:25.213905  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:25.213922  195816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:19:25.498098  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:19:25.498132  195816 machine.go:97] duration metric: took 907.465894ms to provisionDockerMachine
	I0731 21:19:25.498144  195816 start.go:293] postStartSetup for "newest-cni-586791" (driver="kvm2")
	I0731 21:19:25.498159  195816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:19:25.498180  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.498573  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:19:25.498612  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.501226  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.501555  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.501582  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.501781  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.501996  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.502177  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.502292  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:25.592660  195816 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:19:25.596907  195816 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:19:25.596932  195816 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 21:19:25.596986  195816 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 21:19:25.597054  195816 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 21:19:25.597147  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 21:19:25.608036  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 21:19:25.632418  195816 start.go:296] duration metric: took 134.258032ms for postStartSetup
	I0731 21:19:25.632459  195816 fix.go:56] duration metric: took 19.763879225s for fixHost
	I0731 21:19:25.632488  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.635194  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.635549  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.635592  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.635764  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.635963  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.636133  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.636285  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.636462  195816 main.go:141] libmachine: Using SSH client type: native
	I0731 21:19:25.636682  195816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0731 21:19:25.636695  195816 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 21:19:25.750024  195816 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722460765.723824092
	
	I0731 21:19:25.750056  195816 fix.go:216] guest clock: 1722460765.723824092
	I0731 21:19:25.750065  195816 fix.go:229] Guest: 2024-07-31 21:19:25.723824092 +0000 UTC Remote: 2024-07-31 21:19:25.632466287 +0000 UTC m=+19.907513448 (delta=91.357805ms)
	I0731 21:19:25.750087  195816 fix.go:200] guest clock delta is within tolerance: 91.357805ms
	I0731 21:19:25.750092  195816 start.go:83] releasing machines lock for "newest-cni-586791", held for 19.881526223s
	I0731 21:19:25.750110  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.750424  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:19:25.753075  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.753406  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.753437  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.753610  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.754084  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.754250  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:25.754341  195816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:19:25.754380  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.754499  195816 ssh_runner.go:195] Run: cat /version.json
	I0731 21:19:25.754522  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:25.756670  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.756945  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.756974  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.757079  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.757236  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.757260  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.757449  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.757488  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:25.757508  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:25.757641  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:25.757660  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:25.757812  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:25.757985  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:25.758165  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:25.864366  195816 ssh_runner.go:195] Run: systemctl --version
	I0731 21:19:25.870216  195816 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:19:26.015775  195816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:19:26.022090  195816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:19:26.022170  195816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:19:26.041572  195816 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:19:26.041598  195816 start.go:495] detecting cgroup driver to use...
	I0731 21:19:26.041685  195816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:19:26.063400  195816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:19:26.078176  195816 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:19:26.078245  195816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:19:26.092273  195816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:19:26.106273  195816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:19:26.229649  195816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:19:26.400310  195816 docker.go:233] disabling docker service ...
	I0731 21:19:26.400378  195816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:19:26.415255  195816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:19:26.429142  195816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:19:26.573649  195816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:19:26.702613  195816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:19:26.717331  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:19:26.737042  195816 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 21:19:26.737117  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.747878  195816 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:19:26.747967  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.759026  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.769442  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.779973  195816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:19:26.790766  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.801848  195816 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.829376  195816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:19:26.841230  195816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:19:26.851118  195816 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:19:26.851204  195816 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:19:26.866373  195816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 21:19:26.876299  195816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:19:27.004064  195816 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:19:27.148522  195816 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:19:27.148608  195816 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:19:27.153659  195816 start.go:563] Will wait 60s for crictl version
	I0731 21:19:27.153725  195816 ssh_runner.go:195] Run: which crictl
	I0731 21:19:27.158003  195816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:19:27.199907  195816 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 21:19:27.200005  195816 ssh_runner.go:195] Run: crio --version
	I0731 21:19:27.229808  195816 ssh_runner.go:195] Run: crio --version
	I0731 21:19:27.264656  195816 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 21:19:27.266274  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetIP
	I0731 21:19:27.269001  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:27.269370  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:27.269398  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:27.269652  195816 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 21:19:27.273963  195816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:19:27.289973  195816 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0731 21:19:27.291272  195816 kubeadm.go:883] updating cluster {Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:19:27.291408  195816 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:19:27.291483  195816 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:19:27.328970  195816 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 21:19:27.329063  195816 ssh_runner.go:195] Run: which lz4
	I0731 21:19:27.333268  195816 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 21:19:27.337498  195816 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:19:27.337527  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0731 21:19:28.692994  195816 crio.go:462] duration metric: took 1.359764132s to copy over tarball
	I0731 21:19:28.693099  195816 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:19:30.805542  195816 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.112408075s)
	I0731 21:19:30.805575  195816 crio.go:469] duration metric: took 2.112541998s to extract the tarball
	I0731 21:19:30.805584  195816 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:19:30.844752  195816 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:19:30.895835  195816 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:19:30.895867  195816 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:19:30.895878  195816 kubeadm.go:934] updating node { 192.168.61.136 8443 v1.31.0-beta.0 crio true true} ...
	I0731 21:19:30.896013  195816 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-586791 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 21:19:30.896099  195816 ssh_runner.go:195] Run: crio config
	I0731 21:19:30.946999  195816 cni.go:84] Creating CNI manager for ""
	I0731 21:19:30.947020  195816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:19:30.947037  195816 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0731 21:19:30.947059  195816 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-586791 NodeName:newest-cni-586791 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:19:30.947201  195816 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-586791"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 21:19:30.947272  195816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 21:19:30.959102  195816 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:19:30.959185  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:19:30.969162  195816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0731 21:19:30.988111  195816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 21:19:31.007271  195816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0731 21:19:31.026649  195816 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0731 21:19:31.030890  195816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:19:31.044216  195816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:19:31.174258  195816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:19:31.192546  195816 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791 for IP: 192.168.61.136
	I0731 21:19:31.192573  195816 certs.go:194] generating shared ca certs ...
	I0731 21:19:31.192594  195816 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:31.192789  195816 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 21:19:31.192846  195816 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 21:19:31.192860  195816 certs.go:256] generating profile certs ...
	I0731 21:19:31.192968  195816 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/client.key
	I0731 21:19:31.193042  195816 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/apiserver.key.4c93ecd9
	I0731 21:19:31.193091  195816 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/proxy-client.key
	I0731 21:19:31.193258  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 21:19:31.193308  195816 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 21:19:31.193324  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 21:19:31.193385  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 21:19:31.193427  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:19:31.193462  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 21:19:31.193517  195816 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 21:19:31.194280  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:19:31.239627  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:19:31.281751  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:19:31.324813  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 21:19:31.355841  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 21:19:31.383375  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:19:31.410240  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:19:31.435825  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/newest-cni-586791/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 21:19:31.460792  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 21:19:31.485288  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:19:31.510081  195816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 21:19:31.533945  195816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:19:31.552444  195816 ssh_runner.go:195] Run: openssl version
	I0731 21:19:31.558469  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 21:19:31.569794  195816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 21:19:31.574378  195816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 21:19:31.574452  195816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 21:19:31.580453  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 21:19:31.592481  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:19:31.605046  195816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:31.609667  195816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:31.609739  195816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:19:31.615432  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 21:19:31.626653  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 21:19:31.637698  195816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 21:19:31.642080  195816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 21:19:31.642132  195816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 21:19:31.648059  195816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 21:19:31.659294  195816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:19:31.663913  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 21:19:31.669771  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 21:19:31.675581  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 21:19:31.682300  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 21:19:31.688600  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 21:19:31.694478  195816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 21:19:31.700326  195816 kubeadm.go:392] StartCluster: {Name:newest-cni-586791 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-586791 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:19:31.700448  195816 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:19:31.700501  195816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:19:31.738050  195816 cri.go:89] found id: ""
	I0731 21:19:31.738115  195816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:19:31.748708  195816 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 21:19:31.748730  195816 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 21:19:31.748791  195816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 21:19:31.758759  195816 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 21:19:31.759577  195816 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-586791" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:19:31.760068  195816 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-586791" cluster setting kubeconfig missing "newest-cni-586791" context setting]
	I0731 21:19:31.760855  195816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:31.762368  195816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 21:19:31.772447  195816 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.136
	I0731 21:19:31.772480  195816 kubeadm.go:1160] stopping kube-system containers ...
	I0731 21:19:31.772494  195816 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 21:19:31.772548  195816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:19:31.817842  195816 cri.go:89] found id: ""
	I0731 21:19:31.817929  195816 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 21:19:31.835824  195816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:19:31.845648  195816 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:19:31.845670  195816 kubeadm.go:157] found existing configuration files:
	
	I0731 21:19:31.845721  195816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:19:31.855627  195816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:19:31.855692  195816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:19:31.865329  195816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:19:31.874329  195816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:19:31.874404  195816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:19:31.884650  195816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:19:31.894584  195816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:19:31.894653  195816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:19:31.905038  195816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:19:31.914559  195816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:19:31.914622  195816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:19:31.925440  195816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:19:31.935796  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:32.056258  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:32.826025  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:33.063699  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:33.126259  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:33.229904  195816 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:19:33.230005  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:33.731021  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:34.230174  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:34.730093  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:35.230571  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:35.245555  195816 api_server.go:72] duration metric: took 2.015653275s to wait for apiserver process to appear ...
	I0731 21:19:35.245580  195816 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:19:35.245603  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:35.246026  195816 api_server.go:269] stopped: https://192.168.61.136:8443/healthz: Get "https://192.168.61.136:8443/healthz": dial tcp 192.168.61.136:8443: connect: connection refused
	I0731 21:19:35.745866  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:37.982917  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:19:37.982947  195816 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:19:37.982963  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:38.047150  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 21:19:38.047182  195816 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 21:19:38.246570  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:38.254560  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:19:38.254594  195816 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:19:38.745689  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:38.750288  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 21:19:38.750322  195816 api_server.go:103] status: https://192.168.61.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 21:19:39.246508  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:39.250743  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0731 21:19:39.257921  195816 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:19:39.257953  195816 api_server.go:131] duration metric: took 4.012364546s to wait for apiserver health ...
	I0731 21:19:39.257965  195816 cni.go:84] Creating CNI manager for ""
	I0731 21:19:39.257974  195816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:19:39.259595  195816 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:19:39.261022  195816 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:19:39.272791  195816 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:19:39.293449  195816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:19:39.302934  195816 system_pods.go:59] 8 kube-system pods found
	I0731 21:19:39.302972  195816 system_pods.go:61] "coredns-5cfdc65f69-ncmmv" [9d4123f3-0bea-4ddc-9178-8ff3e8c2c903] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:19:39.302983  195816 system_pods.go:61] "etcd-newest-cni-586791" [33a5d651-e33e-4b97-9727-0587fccb79ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:19:39.302993  195816 system_pods.go:61] "kube-apiserver-newest-cni-586791" [d1344d91-f88f-439b-8a35-3c3a5ba7c347] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:19:39.303001  195816 system_pods.go:61] "kube-controller-manager-newest-cni-586791" [2f13bf79-a075-464d-be20-3945de8a453b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:19:39.303019  195816 system_pods.go:61] "kube-proxy-5w5q8" [f6b5eab7-51b5-43ec-9e7d-c1489107d922] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 21:19:39.303028  195816 system_pods.go:61] "kube-scheduler-newest-cni-586791" [9fb1fafe-762b-40cd-bb68-4f5ab0f69d4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:19:39.303039  195816 system_pods.go:61] "metrics-server-78fcd8795b-f9qfb" [6a57bd4b-35e4-41b8-898c-166e81df7e8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:19:39.303052  195816 system_pods.go:61] "storage-provisioner" [fbc0ac03-73b2-4a78-8ff7-0f7bd55e91e8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:19:39.303065  195816 system_pods.go:74] duration metric: took 9.589202ms to wait for pod list to return data ...
	I0731 21:19:39.303075  195816 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:19:39.306693  195816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:19:39.306720  195816 node_conditions.go:123] node cpu capacity is 2
	I0731 21:19:39.306732  195816 node_conditions.go:105] duration metric: took 3.649546ms to run NodePressure ...
	I0731 21:19:39.306756  195816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 21:19:39.625239  195816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:19:39.637663  195816 ops.go:34] apiserver oom_adj: -16
	I0731 21:19:39.637691  195816 kubeadm.go:597] duration metric: took 7.888952374s to restartPrimaryControlPlane
	I0731 21:19:39.637703  195816 kubeadm.go:394] duration metric: took 7.937393791s to StartCluster
	I0731 21:19:39.637725  195816 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:39.637805  195816 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:19:39.639388  195816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:19:39.639682  195816 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:19:39.639762  195816 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:19:39.639861  195816 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-586791"
	I0731 21:19:39.639892  195816 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-586791"
	W0731 21:19:39.639905  195816 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:19:39.639927  195816 addons.go:69] Setting metrics-server=true in profile "newest-cni-586791"
	I0731 21:19:39.639935  195816 config.go:182] Loaded profile config "newest-cni-586791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:19:39.639934  195816 addons.go:69] Setting dashboard=true in profile "newest-cni-586791"
	I0731 21:19:39.639930  195816 addons.go:69] Setting default-storageclass=true in profile "newest-cni-586791"
	I0731 21:19:39.639973  195816 addons.go:234] Setting addon metrics-server=true in "newest-cni-586791"
	W0731 21:19:39.639992  195816 addons.go:243] addon metrics-server should already be in state true
	I0731 21:19:39.639995  195816 addons.go:234] Setting addon dashboard=true in "newest-cni-586791"
	W0731 21:19:39.640004  195816 addons.go:243] addon dashboard should already be in state true
	I0731 21:19:39.640013  195816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-586791"
	I0731 21:19:39.640028  195816 host.go:66] Checking if "newest-cni-586791" exists ...
	I0731 21:19:39.640028  195816 host.go:66] Checking if "newest-cni-586791" exists ...
	I0731 21:19:39.639938  195816 host.go:66] Checking if "newest-cni-586791" exists ...
	I0731 21:19:39.640424  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.640445  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.640456  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.640468  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.640497  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.640546  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.640551  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.640576  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.641413  195816 out.go:177] * Verifying Kubernetes components...
	I0731 21:19:39.642899  195816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:19:39.657576  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I0731 21:19:39.658477  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.659146  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.659174  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.659586  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.659818  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.660548  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0731 21:19:39.660704  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0731 21:19:39.660736  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0731 21:19:39.661140  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.661756  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.661777  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.661797  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.661870  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.662171  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.662355  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.662397  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.662371  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.662459  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.662790  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.662817  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.663246  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.663248  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.663832  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.663878  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.664245  195816 addons.go:234] Setting addon default-storageclass=true in "newest-cni-586791"
	W0731 21:19:39.664268  195816 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:19:39.664300  195816 host.go:66] Checking if "newest-cni-586791" exists ...
	I0731 21:19:39.664577  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.664616  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.664660  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.664690  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.680643  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41131
	I0731 21:19:39.681577  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.682391  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.682415  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.682785  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.682964  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.683573  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38047
	I0731 21:19:39.684226  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.684782  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.684806  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.685120  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.685296  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.685407  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:39.685533  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I0731 21:19:39.685968  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.686519  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.686539  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.686954  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.687010  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33351
	I0731 21:19:39.687147  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:39.687506  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.687528  195816 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:19:39.687737  195816 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:19:39.687770  195816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:19:39.687937  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.687958  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.688342  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.688520  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.688826  195816 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:19:39.688899  195816 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:19:39.688922  195816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:19:39.688941  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:39.690163  195816 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:19:39.690182  195816 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:19:39.690210  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:39.690529  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:39.692998  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.693493  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:39.693527  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.693637  195816 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0731 21:19:39.694277  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.694367  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:39.694558  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:39.694626  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:39.694641  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.694717  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:39.694875  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:39.695218  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:39.695857  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:39.696036  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:39.696207  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:39.697797  195816 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0731 21:19:39.699210  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0731 21:19:39.699226  195816 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0731 21:19:39.699239  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:39.702403  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.702771  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:39.702793  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.703049  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:39.703253  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:39.703432  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:39.703624  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:39.707367  195816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0731 21:19:39.707839  195816 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:19:39.708393  195816 main.go:141] libmachine: Using API Version  1
	I0731 21:19:39.708421  195816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:19:39.708786  195816 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:19:39.708987  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetState
	I0731 21:19:39.710855  195816 main.go:141] libmachine: (newest-cni-586791) Calling .DriverName
	I0731 21:19:39.711091  195816 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:19:39.711108  195816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:19:39.711124  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHHostname
	I0731 21:19:39.714024  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.714424  195816 main.go:141] libmachine: (newest-cni-586791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:e4:c3", ip: ""} in network mk-newest-cni-586791: {Iface:virbr3 ExpiryTime:2024-07-31 22:19:17 +0000 UTC Type:0 Mac:52:54:00:c5:e4:c3 Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:newest-cni-586791 Clientid:01:52:54:00:c5:e4:c3}
	I0731 21:19:39.714448  195816 main.go:141] libmachine: (newest-cni-586791) DBG | domain newest-cni-586791 has defined IP address 192.168.61.136 and MAC address 52:54:00:c5:e4:c3 in network mk-newest-cni-586791
	I0731 21:19:39.714563  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHPort
	I0731 21:19:39.714755  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHKeyPath
	I0731 21:19:39.714900  195816 main.go:141] libmachine: (newest-cni-586791) Calling .GetSSHUsername
	I0731 21:19:39.715035  195816 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/newest-cni-586791/id_rsa Username:docker}
	I0731 21:19:39.833151  195816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:19:39.851102  195816 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:19:39.851198  195816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:19:39.865092  195816 api_server.go:72] duration metric: took 225.369543ms to wait for apiserver process to appear ...
	I0731 21:19:39.865115  195816 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:19:39.865134  195816 api_server.go:253] Checking apiserver healthz at https://192.168.61.136:8443/healthz ...
	I0731 21:19:39.870078  195816 api_server.go:279] https://192.168.61.136:8443/healthz returned 200:
	ok
	I0731 21:19:39.871223  195816 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:19:39.871241  195816 api_server.go:131] duration metric: took 6.119625ms to wait for apiserver health ...
	I0731 21:19:39.871250  195816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:19:39.877669  195816 system_pods.go:59] 8 kube-system pods found
	I0731 21:19:39.877703  195816 system_pods.go:61] "coredns-5cfdc65f69-ncmmv" [9d4123f3-0bea-4ddc-9178-8ff3e8c2c903] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 21:19:39.877719  195816 system_pods.go:61] "etcd-newest-cni-586791" [33a5d651-e33e-4b97-9727-0587fccb79ef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 21:19:39.877730  195816 system_pods.go:61] "kube-apiserver-newest-cni-586791" [d1344d91-f88f-439b-8a35-3c3a5ba7c347] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 21:19:39.877743  195816 system_pods.go:61] "kube-controller-manager-newest-cni-586791" [2f13bf79-a075-464d-be20-3945de8a453b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 21:19:39.877749  195816 system_pods.go:61] "kube-proxy-5w5q8" [f6b5eab7-51b5-43ec-9e7d-c1489107d922] Running
	I0731 21:19:39.877768  195816 system_pods.go:61] "kube-scheduler-newest-cni-586791" [9fb1fafe-762b-40cd-bb68-4f5ab0f69d4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 21:19:39.877780  195816 system_pods.go:61] "metrics-server-78fcd8795b-f9qfb" [6a57bd4b-35e4-41b8-898c-166e81df7e8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:19:39.877797  195816 system_pods.go:61] "storage-provisioner" [fbc0ac03-73b2-4a78-8ff7-0f7bd55e91e8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 21:19:39.877808  195816 system_pods.go:74] duration metric: took 6.550593ms to wait for pod list to return data ...
	I0731 21:19:39.877819  195816 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:19:39.880834  195816 default_sa.go:45] found service account: "default"
	I0731 21:19:39.880851  195816 default_sa.go:55] duration metric: took 3.025891ms for default service account to be created ...
	I0731 21:19:39.880861  195816 kubeadm.go:582] duration metric: took 241.142673ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 21:19:39.880874  195816 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:19:39.883590  195816 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:19:39.883611  195816 node_conditions.go:123] node cpu capacity is 2
	I0731 21:19:39.883623  195816 node_conditions.go:105] duration metric: took 2.743937ms to run NodePressure ...
	I0731 21:19:39.883637  195816 start.go:241] waiting for startup goroutines ...
	I0731 21:19:39.943269  195816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:19:39.979984  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0731 21:19:39.980010  195816 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0731 21:19:39.984368  195816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:19:39.984394  195816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:19:39.995538  195816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:19:40.009058  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0731 21:19:40.009084  195816 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0731 21:19:40.026040  195816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:19:40.026068  195816 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:19:40.115073  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0731 21:19:40.115101  195816 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0731 21:19:40.138998  195816 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:19:40.139023  195816 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:19:40.220042  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0731 21:19:40.220064  195816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0731 21:19:40.228886  195816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:19:40.419638  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0731 21:19:40.419668  195816 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0731 21:19:40.452603  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0731 21:19:40.452638  195816 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0731 21:19:40.555206  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0731 21:19:40.555246  195816 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0731 21:19:40.647218  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0731 21:19:40.647255  195816 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0731 21:19:40.670656  195816 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 21:19:40.670683  195816 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0731 21:19:40.694003  195816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0731 21:19:41.836210  195816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.892902167s)
	I0731 21:19:41.836270  195816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.840691495s)
	I0731 21:19:41.836321  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836339  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836348  195816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.607437143s)
	I0731 21:19:41.836375  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836390  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836273  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836432  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836638  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.836651  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.836660  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836666  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836739  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.836746  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.836753  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836760  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836846  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.836865  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.836882  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.836895  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.836960  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.836958  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Closing plugin on server side
	I0731 21:19:41.836967  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.836978  195816 addons.go:475] Verifying addon metrics-server=true in "newest-cni-586791"
	I0731 21:19:41.837003  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.837011  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.838495  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Closing plugin on server side
	I0731 21:19:41.838530  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.838538  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.847001  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:41.847028  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:41.847329  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:41.847346  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:41.847353  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Closing plugin on server side
	I0731 21:19:42.210821  195816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.516766631s)
	I0731 21:19:42.210888  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:42.210904  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:42.211326  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:42.211345  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:42.211356  195816 main.go:141] libmachine: Making call to close driver server
	I0731 21:19:42.211365  195816 main.go:141] libmachine: (newest-cni-586791) Calling .Close
	I0731 21:19:42.211624  195816 main.go:141] libmachine: (newest-cni-586791) DBG | Closing plugin on server side
	I0731 21:19:42.211679  195816 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:19:42.211690  195816 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:19:42.213543  195816 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-586791 addons enable metrics-server
	
	I0731 21:19:42.215059  195816 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0731 21:19:42.216697  195816 addons.go:510] duration metric: took 2.576947973s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0731 21:19:42.216739  195816 start.go:246] waiting for cluster config update ...
	I0731 21:19:42.216753  195816 start.go:255] writing updated cluster config ...
	I0731 21:19:42.217029  195816 ssh_runner.go:195] Run: rm -f paused
	I0731 21:19:42.277240  195816 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:19:42.278995  195816 out.go:177] * Done! kubectl is now configured to use "newest-cni-586791" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.618313405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460793618292100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d92fc05-76c0-4c7b-9b5d-5bc9febb0cb4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.619339079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e127a78-95b1-46b7-8c43-14a2b7172943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.619390620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e127a78-95b1-46b7-8c43-14a2b7172943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.619733845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c232171f4c0eca21dc25a6c4d0f52c084e5a1a7af6d60912bf3730fc909b20e6,PodSandboxId:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459608923995607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{io.kubernetes.container.hash: 5666726c,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084,PodSandboxId:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459606116291685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,},Annotations:map[string]string{io.kubernetes.container.hash: db490d7e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459599180433548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459598605937073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845,PodSandboxId:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459598560470090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28
d21,},Annotations:map[string]string{io.kubernetes.container.hash: f9c9821,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5,PodSandboxId:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459593901395850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e,PodSandboxId:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459593892773295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c24fb674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473,PodSandboxId:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459593906548374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8
701a33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f,PodSandboxId:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459593903235672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e127a78-95b1-46b7-8c43-14a2b7172943 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.672407795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf2fc6ef-6909-4cbc-878f-1fc865c8ed28 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.672484798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf2fc6ef-6909-4cbc-878f-1fc865c8ed28 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.673540637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac5309e0-e95e-4f59-8876-4501963eff6a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.674077804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460793674055897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac5309e0-e95e-4f59-8876-4501963eff6a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.674758015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67cfaf62-ad52-4530-bae2-f9ba8eea657e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.674811679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67cfaf62-ad52-4530-bae2-f9ba8eea657e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.676802491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c232171f4c0eca21dc25a6c4d0f52c084e5a1a7af6d60912bf3730fc909b20e6,PodSandboxId:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459608923995607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{io.kubernetes.container.hash: 5666726c,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084,PodSandboxId:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459606116291685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,},Annotations:map[string]string{io.kubernetes.container.hash: db490d7e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459599180433548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459598605937073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845,PodSandboxId:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459598560470090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28
d21,},Annotations:map[string]string{io.kubernetes.container.hash: f9c9821,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5,PodSandboxId:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459593901395850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e,PodSandboxId:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459593892773295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c24fb674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473,PodSandboxId:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459593906548374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8
701a33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f,PodSandboxId:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459593903235672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67cfaf62-ad52-4530-bae2-f9ba8eea657e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.719540235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6e7cf24-41fa-48fe-ab04-3759be8dc1d4 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.719613180Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6e7cf24-41fa-48fe-ab04-3759be8dc1d4 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.720931852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bebd5fb-c18d-452d-aaa8-78ef6509cba0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.721313131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460793721291408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bebd5fb-c18d-452d-aaa8-78ef6509cba0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.722036892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbdda9d7-3016-4424-b4b2-242e7979d335 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.722087719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbdda9d7-3016-4424-b4b2-242e7979d335 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.722273317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c232171f4c0eca21dc25a6c4d0f52c084e5a1a7af6d60912bf3730fc909b20e6,PodSandboxId:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459608923995607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{io.kubernetes.container.hash: 5666726c,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084,PodSandboxId:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459606116291685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,},Annotations:map[string]string{io.kubernetes.container.hash: db490d7e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459599180433548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459598605937073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845,PodSandboxId:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459598560470090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28
d21,},Annotations:map[string]string{io.kubernetes.container.hash: f9c9821,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5,PodSandboxId:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459593901395850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e,PodSandboxId:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459593892773295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c24fb674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473,PodSandboxId:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459593906548374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8
701a33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f,PodSandboxId:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459593903235672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbdda9d7-3016-4424-b4b2-242e7979d335 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.758893258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db14c569-c398-4891-9876-3a8b4fc8b341 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.758975466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db14c569-c398-4891-9876-3a8b4fc8b341 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.760371242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d936afc-c513-4183-b97e-f2e8b2280369 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.760792767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460793760773647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d936afc-c513-4183-b97e-f2e8b2280369 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.761310039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2663aae9-be1a-4848-ad06-81f4e9654e80 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.761359033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2663aae9-be1a-4848-ad06-81f4e9654e80 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:19:53 embed-certs-831240 crio[737]: time="2024-07-31 21:19:53.761558992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c232171f4c0eca21dc25a6c4d0f52c084e5a1a7af6d60912bf3730fc909b20e6,PodSandboxId:a1ba259d456e2257982823a55ecfd778b2259e15bb8f403822b55c895440d528,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722459608923995607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e9dc9efd-eba1-4457-8c17-44c18ddc2986,},Annotations:map[string]string{io.kubernetes.container.hash: 5666726c,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084,PodSandboxId:9346eed236060a0f0a3cf63e6c1507c75d7935b16321758e8f306783f7dd3c6d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722459606116291685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2ks55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5ad9d76-5cdc-430e-8933-7e72a2dda95f,},Annotations:map[string]string{io.kubernetes.container.hash: db490d7e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722459599180433548,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2,PodSandboxId:150427d1d3a85e8448c9603e3d2bdaae011a8c63ad5cb5d83e844866dd6b3467,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722459598605937073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
d3d5fa24-96e8-4ab5-9887-62ff8b82f21d,},Annotations:map[string]string{io.kubernetes.container.hash: f6a709a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845,PodSandboxId:4c6d4159b4e576518d77c4b7bc80dfd9b4dff64edb90b61ca3d7a24e86ca1a0e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722459598560470090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x662j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ad0d8a8-94b4-4f3e-b5da-4e5585c28
d21,},Annotations:map[string]string{io.kubernetes.container.hash: f9c9821,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5,PodSandboxId:7b1d539936fe61442a4d02b8a0b417149eb06f3015c44faf114a78d0318600ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722459593901395850,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4acf8178011ec8033f5125bfb2873e,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e,PodSandboxId:126e48bab73273201a9f8f02134dd9861d34773b79946e5bdd0b02b33b02bdbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722459593892773295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe6ee627ad68fa4b9c68b699e5ec6f11,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c24fb674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473,PodSandboxId:47c3808cbb510b243766ce95854494a3ac6c0f6f82299b2da0e5a23884cc3674,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722459593906548374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2379a05be72742c63e504be6c05a56c0,},Annotations:map[string]string{io.kubernetes.container.hash: 8
701a33f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f,PodSandboxId:3603f0355113109c8b0f2f2a3c6c74ea1e1e58426d061ad4f10dc3bca2780ff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722459593903235672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-831240,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb75276836c0666f9aaf558c691b62a,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2663aae9-be1a-4848-ad06-81f4e9654e80 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c232171f4c0ec       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   a1ba259d456e2       busybox
	1a7f319ba94b3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   9346eed236060       coredns-7db6d8ff4d-2ks55
	919f3cf1d058c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   150427d1d3a85       storage-provisioner
	c0ca8e260d6f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   150427d1d3a85       storage-provisioner
	b51b7e8b0ab34       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      19 minutes ago      Running             kube-proxy                1                   4c6d4159b4e57       kube-proxy-x662j
	dafbb34397064       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      19 minutes ago      Running             kube-apiserver            1                   47c3808cbb510       kube-apiserver-embed-certs-831240
	0854d075486b3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      19 minutes ago      Running             kube-controller-manager   1                   3603f03551131       kube-controller-manager-embed-certs-831240
	3ac0d9edc6a97       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      19 minutes ago      Running             kube-scheduler            1                   7b1d539936fe6       kube-scheduler-embed-certs-831240
	7544698b6925d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   126e48bab7327       etcd-embed-certs-831240
	
	
	==> coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35877 - 465 "HINFO IN 3264330224851131081.6087925659700021598. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01634638s
	
	
	==> describe nodes <==
	Name:               embed-certs-831240
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-831240
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=embed-certs-831240
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T20_50_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 20:50:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-831240
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 21:19:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 21:15:47 +0000   Wed, 31 Jul 2024 20:50:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 21:15:47 +0000   Wed, 31 Jul 2024 20:50:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 21:15:47 +0000   Wed, 31 Jul 2024 20:50:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 21:15:47 +0000   Wed, 31 Jul 2024 21:00:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.92
	  Hostname:    embed-certs-831240
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ba3feff689b407f95c0441506aeade9
	  System UUID:                3ba3feff-689b-407f-95c0-441506aeade9
	  Boot ID:                    3d58d390-3b96-4c0d-8218-86dbdef3d594
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-2ks55                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-831240                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-831240             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-831240    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-x662j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-831240             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-slbkm               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-831240 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-831240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-831240 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-831240 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-831240 event: Registered Node embed-certs-831240 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-831240 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-831240 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-831240 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-831240 event: Registered Node embed-certs-831240 in Controller
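The block above is kubectl describe output for the embed-certs-831240 node as captured by the test harness after the failure. Assuming the profile were still running, roughly the same view could be regenerated with:

    kubectl --context embed-certs-831240 describe node embed-certs-831240
    kubectl --context embed-certs-831240 get events --field-selector involvedObject.name=embed-certs-831240 --sort-by=.lastTimestamp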
	
	
	==> dmesg <==
	[Jul31 20:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055797] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043149] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.146117] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.581222] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.601985] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.392567] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.060909] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079130] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.163159] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.144212] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +0.283662] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +4.405540] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +0.072373] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778971] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +5.671234] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.341414] systemd-fstab-generator[1608]: Ignoring "noauto" option for root device
	[Jul31 21:00] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.555378] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] <==
	{"level":"info","ts":"2024-07-31T20:59:55.914588Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T20:59:55.914985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.92:2379"}
	{"level":"info","ts":"2024-07-31T21:09:55.940411Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2024-07-31T21:09:55.951471Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":834,"took":"10.646857ms","hash":3126182695,"current-db-size-bytes":2621440,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2621440,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-31T21:09:55.951531Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3126182695,"revision":834,"compact-revision":-1}
	{"level":"info","ts":"2024-07-31T21:14:55.949845Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1077}
	{"level":"info","ts":"2024-07-31T21:14:55.953752Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1077,"took":"3.552892ms","hash":2142498061,"current-db-size-bytes":2621440,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-31T21:14:55.953809Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2142498061,"revision":1077,"compact-revision":834}
	{"level":"info","ts":"2024-07-31T21:18:39.794219Z","caller":"traceutil/trace.go:171","msg":"trace[709890567] transaction","detail":"{read_only:false; response_revision:1503; number_of_response:1; }","duration":"106.441739ms","start":"2024-07-31T21:18:39.687725Z","end":"2024-07-31T21:18:39.794167Z","steps":["trace[709890567] 'process raft request'  (duration: 106.333698ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:18:40.845579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.292768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:18:40.845739Z","caller":"traceutil/trace.go:171","msg":"trace[1587955124] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1504; }","duration":"175.571475ms","start":"2024-07-31T21:18:40.670152Z","end":"2024-07-31T21:18:40.845724Z","steps":["trace[1587955124] 'range keys from in-memory index tree'  (duration: 175.189582ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:18:42.807078Z","caller":"traceutil/trace.go:171","msg":"trace[717529705] linearizableReadLoop","detail":"{readStateIndex:1770; appliedIndex:1769; }","duration":"137.159477ms","start":"2024-07-31T21:18:42.669902Z","end":"2024-07-31T21:18:42.807061Z","steps":["trace[717529705] 'read index received'  (duration: 136.90524ms)","trace[717529705] 'applied index is now lower than readState.Index'  (duration: 253.601µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T21:18:42.807225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.255059ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:18:42.807248Z","caller":"traceutil/trace.go:171","msg":"trace[414304190] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1505; }","duration":"137.367493ms","start":"2024-07-31T21:18:42.669875Z","end":"2024-07-31T21:18:42.807242Z","steps":["trace[414304190] 'agreement among raft nodes before linearized reading'  (duration: 137.26404ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:18:42.807462Z","caller":"traceutil/trace.go:171","msg":"trace[397708894] transaction","detail":"{read_only:false; response_revision:1505; number_of_response:1; }","duration":"210.649418ms","start":"2024-07-31T21:18:42.596804Z","end":"2024-07-31T21:18:42.807454Z","steps":["trace[397708894] 'process raft request'  (duration: 210.093705ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T21:19:33.536729Z","caller":"traceutil/trace.go:171","msg":"trace[57106837] transaction","detail":"{read_only:false; response_revision:1545; number_of_response:1; }","duration":"469.608667ms","start":"2024-07-31T21:19:33.067043Z","end":"2024-07-31T21:19:33.536651Z","steps":["trace[57106837] 'process raft request'  (duration: 469.270503ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:19:33.537598Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T21:19:33.06703Z","time spent":"469.818857ms","remote":"127.0.0.1:44838","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1544 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-31T21:19:33.916497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.695532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:19:33.916573Z","caller":"traceutil/trace.go:171","msg":"trace[1931553197] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1545; }","duration":"119.919149ms","start":"2024-07-31T21:19:33.796642Z","end":"2024-07-31T21:19:33.916562Z","steps":["trace[1931553197] 'count revisions from in-memory index tree'  (duration: 119.628046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:19:33.916781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"247.131664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:19:33.916819Z","caller":"traceutil/trace.go:171","msg":"trace[1653536924] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1545; }","duration":"247.19595ms","start":"2024-07-31T21:19:33.669617Z","end":"2024-07-31T21:19:33.916813Z","steps":["trace[1653536924] 'range keys from in-memory index tree'  (duration: 246.987927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T21:19:34.043728Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.721112ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11042141336061788900 > lease_revoke:<id:193d910a979ec296>","response":"size:29"}
	{"level":"info","ts":"2024-07-31T21:19:34.043804Z","caller":"traceutil/trace.go:171","msg":"trace[1439393179] linearizableReadLoop","detail":"{readStateIndex:1821; appliedIndex:1820; }","duration":"126.143317ms","start":"2024-07-31T21:19:33.91765Z","end":"2024-07-31T21:19:34.043794Z","steps":["trace[1439393179] 'read index received'  (duration: 4.104666ms)","trace[1439393179] 'applied index is now lower than readState.Index'  (duration: 122.037883ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T21:19:34.043856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.222133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T21:19:34.043869Z","caller":"traceutil/trace.go:171","msg":"trace[1710708635] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1545; }","duration":"126.260837ms","start":"2024-07-31T21:19:33.917603Z","end":"2024-07-31T21:19:34.043864Z","steps":["trace[1710708635] 'agreement among raft nodes before linearized reading'  (duration: 126.223429ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:19:54 up 20 min,  0 users,  load average: 0.18, 0.12, 0.09
	Linux embed-certs-831240 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] <==
	I0731 21:12:58.305725       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:14:57.310504       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:14:57.310610       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0731 21:14:58.310977       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:14:58.311121       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:14:58.311163       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:14:58.311019       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:14:58.311279       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:14:58.312531       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:15:58.312357       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:15:58.312431       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:15:58.312440       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:15:58.313554       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:15:58.313732       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:15:58.313765       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:17:58.313483       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:17:58.313563       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0731 21:17:58.313571       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0731 21:17:58.314754       1 handler_proxy.go:93] no RequestInfo found in the context
	E0731 21:17:58.314832       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0731 21:17:58.314838       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
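Every error in the apiserver log above is the same condition repeating on the aggregation controller's requeue interval: the v1beta1.metrics.k8s.io APIService fronts the metrics-server Service, and the backing pod never came up (see the kubelet ImagePullBackOff entries further down), so OpenAPI aggregation keeps receiving 503s. Assuming a reachable cluster, the aggregated API's availability could be checked directly; the k8s-app=metrics-server label is assumed from the usual metrics-server manifests:

    kubectl --context embed-certs-831240 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-831240 -n kube-system get pods -l k8s-app=metrics-server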
	
	
	==> kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] <==
	I0731 21:14:11.207765       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:14:40.712866       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:14:41.219228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:15:10.719175       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:15:11.226253       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:15:40.724381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:15:41.233821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:16:10.730000       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:16:11.242117       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0731 21:16:15.104732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="330.963µs"
	I0731 21:16:26.099938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="208.405µs"
	E0731 21:16:40.734854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:16:41.250371       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:17:10.740647       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:17:11.260379       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:17:40.745746       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:17:41.268429       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:18:10.750405       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:18:11.277079       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:18:40.756206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:18:41.285215       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:19:10.762253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:19:11.295715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0731 21:19:40.770220       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0731 21:19:41.306331       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
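The controller-manager errors are downstream of the same missing metrics API: resource-quota and garbage-collector discovery trip over the unserved metrics.k8s.io/v1beta1 group roughly every thirty seconds. A direct probe of that group, expected to fail the same way, would be something like:

    kubectl --context embed-certs-831240 get --raw /apis/metrics.k8s.io/v1beta1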
	
	
	==> kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] <==
	I0731 20:59:58.755536       1 server_linux.go:69] "Using iptables proxy"
	I0731 20:59:58.765857       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.92"]
	I0731 20:59:58.798976       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 20:59:58.799019       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 20:59:58.799034       1 server_linux.go:165] "Using iptables Proxier"
	I0731 20:59:58.801627       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 20:59:58.801911       1 server.go:872] "Version info" version="v1.30.3"
	I0731 20:59:58.801936       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 20:59:58.804372       1 config.go:192] "Starting service config controller"
	I0731 20:59:58.804411       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 20:59:58.804455       1 config.go:101] "Starting endpoint slice config controller"
	I0731 20:59:58.804472       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 20:59:58.806416       1 config.go:319] "Starting node config controller"
	I0731 20:59:58.806448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 20:59:58.905388       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 20:59:58.905467       1 shared_informer.go:320] Caches are synced for service config
	I0731 20:59:58.907001       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] <==
	I0731 20:59:57.279856       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 20:59:57.279952       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0731 20:59:57.291326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:59:57.291269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 20:59:57.291459       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 20:59:57.291526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 20:59:57.291750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:59:57.291851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:59:57.291860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 20:59:57.291781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 20:59:57.292060       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 20:59:57.292088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 20:59:57.292156       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 20:59:57.292872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 20:59:57.292358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 20:59:57.292906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 20:59:57.292456       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 20:59:57.292995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 20:59:57.292490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 20:59:57.293010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 20:59:57.292582       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 20:59:57.293095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 20:59:57.292765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 20:59:57.293108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0731 20:59:57.380803       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
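All of the "forbidden" reflector errors in the scheduler log carry the same 20:59:57 timestamp, i.e. the first second after the scheduler started, typically before the bootstrap RBAC bindings for system:kube-scheduler exist; the closing "Caches are synced" line shows they cleared on their own. Had they persisted, a quick impersonation check from an admin kubeconfig would be:

    kubectl --context embed-certs-831240 auth can-i list pods --as=system:kube-scheduler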
	
	
	==> kubelet <==
	Jul 31 21:17:46 embed-certs-831240 kubelet[948]: E0731 21:17:46.081223     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:17:53 embed-certs-831240 kubelet[948]: E0731 21:17:53.108740     948 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:17:53 embed-certs-831240 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:17:53 embed-certs-831240 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:17:53 embed-certs-831240 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:17:53 embed-certs-831240 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:17:58 embed-certs-831240 kubelet[948]: E0731 21:17:58.081808     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:18:12 embed-certs-831240 kubelet[948]: E0731 21:18:12.081607     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:18:26 embed-certs-831240 kubelet[948]: E0731 21:18:26.080608     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:18:38 embed-certs-831240 kubelet[948]: E0731 21:18:38.083780     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:18:51 embed-certs-831240 kubelet[948]: E0731 21:18:51.080770     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:18:53 embed-certs-831240 kubelet[948]: E0731 21:18:53.107187     948 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:18:53 embed-certs-831240 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:18:53 embed-certs-831240 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:18:53 embed-certs-831240 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:18:53 embed-certs-831240 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 21:19:04 embed-certs-831240 kubelet[948]: E0731 21:19:04.082143     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:19:18 embed-certs-831240 kubelet[948]: E0731 21:19:18.081773     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:19:33 embed-certs-831240 kubelet[948]: E0731 21:19:33.084526     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:19:45 embed-certs-831240 kubelet[948]: E0731 21:19:45.083074     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-slbkm" podUID="f93f674b-1f0e-443b-ac06-9c2a5234eeea"
	Jul 31 21:19:53 embed-certs-831240 kubelet[948]: E0731 21:19:53.105495     948 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 21:19:53 embed-certs-831240 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 21:19:53 embed-certs-831240 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 21:19:53 embed-certs-831240 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 21:19:53 embed-certs-831240 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
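Two messages repeat throughout the kubelet log: the metrics-server pod stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 (the fake.domain registry suggests the unpullable image is deliberate on the test's part), and the periodic iptables canary failing because the guest kernel has no ip6tables nat table loaded. If the canary noise mattered on a live node, the missing module could be probed for roughly like this; the module name is inferred from the error text, not from this report:

    minikube ssh -p embed-certs-831240 "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"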
	
	
	==> storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] <==
	I0731 20:59:59.377369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 20:59:59.399798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 20:59:59.399907       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 21:00:16.807008       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 21:00:16.807203       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-831240_33fb3b93-9780-45ba-addc-4cd2a27f806b!
	I0731 21:00:16.808524       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3371e2f2-9fef-4856-9b93-ff0c113558f7", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-831240_33fb3b93-9780-45ba-addc-4cd2a27f806b became leader
	I0731 21:00:16.908361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-831240_33fb3b93-9780-45ba-addc-4cd2a27f806b!
	
	
	==> storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] <==
	I0731 20:59:58.711340       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 20:59:58.715150       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
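Of the two storage-provisioner containers, c0ca8e26... is the earlier one: it died at 20:59:58 because the in-cluster service VIP 10.96.0.1:443 was refusing connections, which lines up with kube-proxy only finishing its first cache sync at 20:59:58.905 in the log above; the restarted container 919f3cf1... then acquired the leader lease at 21:00:16 and ran normally. Assuming the cluster were still reachable, the VIP and its backing endpoint could be sanity-checked with:

    kubectl --context embed-certs-831240 -n default get svc kubernetes
    kubectl --context embed-certs-831240 -n default get endpoints kubernetes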
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831240 -n embed-certs-831240
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-831240 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-slbkm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-831240 describe pod metrics-server-569cc877fc-slbkm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-831240 describe pod metrics-server-569cc877fc-slbkm: exit status 1 (60.07569ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-slbkm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-831240 describe pod metrics-server-569cc877fc-slbkm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (384.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (88.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
E0731 21:17:11.938301  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[identical warning repeated 11 more times]
E0731 21:17:30.947243  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/custom-flannel-341849/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[identical warning repeated 2 more times]
E0731 21:17:34.577621  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.51:8443: connect: connection refused
[identical warning repeated 28 more times]
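The stream of connection-refused warnings above comes from the helper's periodic pod-list poll: it keeps asking the apiserver at 192.168.61.51:8443 for pods labelled k8s-app=kubernetes-dashboard and logs a WARNING on every failed attempt rather than aborting, so a stopped apiserver shows up as a long run of identical lines until the 9m0s deadline expires. The following is a minimal sketch of that kind of poll, assuming client-go; it is illustrative only, not the actual helpers_test.go code, and the function name and kubeconfig path are made up.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForDashboardPod polls until a kubernetes-dashboard pod is Running or the
    // timeout expires; list errors (e.g. "connection refused") are logged and retried.
    func waitForDashboardPod(kubeconfig string) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
    				LabelSelector: "k8s-app=kubernetes-dashboard",
    			})
    			if err != nil {
    				// Keep polling instead of failing, which is why a down apiserver
    				// produces a stream of WARNING lines like the ones above.
    				fmt.Printf("WARNING: pod list returned: %v\n", err)
    				return false, nil
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	// Kubeconfig path is an example, not the path used by this job.
    	if err := waitForDashboardPod("/home/jenkins/.kube/config"); err != nil {
    		fmt.Println("dashboard pod never became Ready:", err)
    	}
    }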
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 2 (244.417305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-239115" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-239115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-239115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.941µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-239115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
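Because the apiserver stays down, the follow-up kubectl describe of deploy/dashboard-metrics-scraper also times out, so the image assertion at start_stop_delete_test.go:297 has nothing to inspect. For reference, a minimal sketch, assuming client-go, of the kind of check that assertion performs once the apiserver is reachable; the package and function names are illustrative, not minikube's actual implementation.

    package dashboardcheck

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // scraperUsesExpectedImage reports whether the dashboard-metrics-scraper deployment
    // runs a container whose image contains the custom registry.k8s.io/echoserver:1.4 tag
    // that the test injected via --images=MetricsScraper=registry.k8s.io/echoserver:1.4.
    func scraperUsesExpectedImage(ctx context.Context, client kubernetes.Interface) (bool, error) {
    	deploy, err := client.AppsV1().Deployments("kubernetes-dashboard").
    		Get(ctx, "dashboard-metrics-scraper", metav1.GetOptions{})
    	if err != nil {
    		// In this run the Get would fail the same way: connection refused or
    		// context deadline exceeded while the apiserver is unreachable.
    		return false, err
    	}
    	for _, c := range deploy.Spec.Template.Spec.Containers {
    		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
    			return true, nil
    		}
    	}
    	return false, nil
    }

With the apiserver stopped, this check can never succeed, which is why the report prints an empty "Addon deployment info:" line above.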
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 2 (218.963383ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-239115 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-239115 logs -n 25: (1.58283331s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC |                     |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo                                  | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo find                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-341849 sudo crio                             | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-341849                                       | bridge-341849                | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-248084 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:49 UTC |
	|         | disable-driver-mounts-248084                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:49 UTC | 31 Jul 24 20:51 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-831240            | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-916885             | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-916885                                   | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-125614  | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC | 31 Jul 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:51 UTC |                     |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-239115        | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-831240                 | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-831240                                  | embed-certs-831240           | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC | 31 Jul 24 21:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-916885                  | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-916885 --memory=2200                     | no-preload-916885            | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:04 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-125614       | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-125614 | jenkins | v1.33.1 | 31 Jul 24 20:54 UTC | 31 Jul 24 21:03 UTC |
	|         | default-k8s-diff-port-125614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-239115             | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC | 31 Jul 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-239115                              | old-k8s-version-239115       | jenkins | v1.33.1 | 31 Jul 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 20:55:13
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 20:55:13.835355  188656 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:55:13.835514  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835525  188656 out.go:304] Setting ErrFile to fd 2...
	I0731 20:55:13.835531  188656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:55:13.835717  188656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:55:13.836233  188656 out.go:298] Setting JSON to false
	I0731 20:55:13.837146  188656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9450,"bootTime":1722449864,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:55:13.837206  188656 start.go:139] virtualization: kvm guest
	I0731 20:55:13.839094  188656 out.go:177] * [old-k8s-version-239115] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:55:13.840630  188656 notify.go:220] Checking for updates...
	I0731 20:55:13.840638  188656 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:55:13.841884  188656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:55:13.843054  188656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:55:13.844295  188656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:55:13.845348  188656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:55:13.846480  188656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:55:13.847974  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:55:13.848349  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.848390  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.863017  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0731 20:55:13.863418  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.863927  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.863980  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.864357  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.864625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.866178  188656 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 20:55:13.867248  188656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:55:13.867523  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:55:13.867552  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:55:13.881922  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44705
	I0731 20:55:13.882304  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:55:13.882707  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:55:13.882729  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:55:13.883037  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:55:13.883214  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:55:13.917067  188656 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 20:55:13.918247  188656 start.go:297] selected driver: kvm2
	I0731 20:55:13.918260  188656 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.918396  188656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:55:13.919323  188656 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.919428  188656 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 20:55:13.934150  188656 install.go:137] /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0731 20:55:13.934506  188656 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 20:55:13.934569  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:55:13.934583  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:55:13.934630  188656 start.go:340] cluster config:
	{Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:55:13.934737  188656 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 20:55:13.936401  188656 out.go:177] * Starting "old-k8s-version-239115" primary control-plane node in "old-k8s-version-239115" cluster
	I0731 20:55:13.769565  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:13.937700  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:55:13.937735  188656 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 20:55:13.937743  188656 cache.go:56] Caching tarball of preloaded images
	I0731 20:55:13.937806  188656 preload.go:172] Found /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 20:55:13.937816  188656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 20:55:13.937907  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:55:13.938068  188656 start.go:360] acquireMachinesLock for old-k8s-version-239115: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:55:19.845616  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:22.917614  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:28.997601  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:32.069596  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:38.149607  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:41.221579  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:47.301587  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:50.373695  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:56.453611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:55:59.525649  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:05.605640  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:08.677654  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:14.757599  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:17.829627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:23.909581  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:26.981613  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:33.061611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:36.133597  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:42.213638  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:45.285703  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:51.365653  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:56:54.437615  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:00.517627  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:03.589595  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:09.669666  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:12.741661  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:18.821643  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:21.893594  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:27.973636  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:31.045651  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:37.125619  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:40.197656  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:46.277679  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:49.349535  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:55.429634  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:57:58.501611  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:04.581620  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:07.653642  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:13.733571  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:16.805674  187862 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.92:22: connect: no route to host
	I0731 20:58:19.809697  188133 start.go:364] duration metric: took 4m15.439364065s to acquireMachinesLock for "no-preload-916885"
	I0731 20:58:19.809748  188133 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:19.809756  188133 fix.go:54] fixHost starting: 
	I0731 20:58:19.810113  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:19.810149  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:19.825131  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0731 20:58:19.825615  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:19.826110  188133 main.go:141] libmachine: Using API Version  1
	I0731 20:58:19.826132  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:19.826439  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:19.826616  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:19.826840  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 20:58:19.828267  188133 fix.go:112] recreateIfNeeded on no-preload-916885: state=Stopped err=<nil>
	I0731 20:58:19.828294  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	W0731 20:58:19.828471  188133 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:19.829957  188133 out.go:177] * Restarting existing kvm2 VM for "no-preload-916885" ...
	I0731 20:58:19.807506  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:19.807579  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.807919  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:58:19.807946  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:58:19.808126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:58:19.809580  187862 machine.go:97] duration metric: took 4m37.431426503s to provisionDockerMachine
	I0731 20:58:19.809625  187862 fix.go:56] duration metric: took 4m37.4520345s for fixHost
	I0731 20:58:19.809631  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 4m37.452053341s
	W0731 20:58:19.809664  187862 start.go:714] error starting host: provision: host is not running
	W0731 20:58:19.809893  187862 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0731 20:58:19.809916  187862 start.go:729] Will try again in 5 seconds ...
	I0731 20:58:19.831221  188133 main.go:141] libmachine: (no-preload-916885) Calling .Start
	I0731 20:58:19.831409  188133 main.go:141] libmachine: (no-preload-916885) Ensuring networks are active...
	I0731 20:58:19.832210  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network default is active
	I0731 20:58:19.832536  188133 main.go:141] libmachine: (no-preload-916885) Ensuring network mk-no-preload-916885 is active
	I0731 20:58:19.832885  188133 main.go:141] libmachine: (no-preload-916885) Getting domain xml...
	I0731 20:58:19.833563  188133 main.go:141] libmachine: (no-preload-916885) Creating domain...
	I0731 20:58:21.031310  188133 main.go:141] libmachine: (no-preload-916885) Waiting to get IP...
	I0731 20:58:21.032067  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.032519  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.032626  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.032509  189287 retry.go:31] will retry after 207.547113ms: waiting for machine to come up
	I0731 20:58:21.242229  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.242716  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.242797  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.242683  189287 retry.go:31] will retry after 307.483232ms: waiting for machine to come up
	I0731 20:58:21.552437  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.552954  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.552977  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.552911  189287 retry.go:31] will retry after 441.063904ms: waiting for machine to come up
	I0731 20:58:21.995514  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:21.995860  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:21.995903  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:21.995813  189287 retry.go:31] will retry after 596.915537ms: waiting for machine to come up
	I0731 20:58:22.594563  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:22.595037  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:22.595079  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:22.594988  189287 retry.go:31] will retry after 471.207023ms: waiting for machine to come up
	I0731 20:58:23.067499  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.067926  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.067950  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.067899  189287 retry.go:31] will retry after 756.851428ms: waiting for machine to come up
	I0731 20:58:23.826869  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:23.827277  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:23.827305  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:23.827232  189287 retry.go:31] will retry after 981.303239ms: waiting for machine to come up
	I0731 20:58:24.810830  187862 start.go:360] acquireMachinesLock for embed-certs-831240: {Name:mk0ee20c9dba367bb5e62f6affdfe6f589095d2a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 20:58:24.810239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:24.810615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:24.810651  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:24.810584  189287 retry.go:31] will retry after 1.18169902s: waiting for machine to come up
	I0731 20:58:25.994320  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:25.994700  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:25.994728  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:25.994635  189287 retry.go:31] will retry after 1.781207961s: waiting for machine to come up
	I0731 20:58:27.778381  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:27.778764  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:27.778805  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:27.778734  189287 retry.go:31] will retry after 1.885603462s: waiting for machine to come up
	I0731 20:58:29.665633  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:29.666049  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:29.666070  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:29.666026  189287 retry.go:31] will retry after 2.664379174s: waiting for machine to come up
	I0731 20:58:32.333226  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:32.333615  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:32.333644  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:32.333594  189287 retry.go:31] will retry after 2.932420774s: waiting for machine to come up
	I0731 20:58:35.267165  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:35.267527  188133 main.go:141] libmachine: (no-preload-916885) DBG | unable to find current IP address of domain no-preload-916885 in network mk-no-preload-916885
	I0731 20:58:35.267558  188133 main.go:141] libmachine: (no-preload-916885) DBG | I0731 20:58:35.267496  189287 retry.go:31] will retry after 4.378841892s: waiting for machine to come up
	I0731 20:58:41.010483  188266 start.go:364] duration metric: took 4m25.11688001s to acquireMachinesLock for "default-k8s-diff-port-125614"
	I0731 20:58:41.010557  188266 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:58:41.010566  188266 fix.go:54] fixHost starting: 
	I0731 20:58:41.010992  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:58:41.011033  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:58:41.030450  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0731 20:58:41.030910  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:58:41.031360  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:58:41.031382  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:58:41.031703  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:58:41.031859  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:41.032020  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:58:41.033653  188266 fix.go:112] recreateIfNeeded on default-k8s-diff-port-125614: state=Stopped err=<nil>
	I0731 20:58:41.033695  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	W0731 20:58:41.033872  188266 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:58:41.035898  188266 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-125614" ...
	I0731 20:58:39.650969  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651458  188133 main.go:141] libmachine: (no-preload-916885) Found IP for machine: 192.168.72.239
	I0731 20:58:39.651475  188133 main.go:141] libmachine: (no-preload-916885) Reserving static IP address...
	I0731 20:58:39.651516  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has current primary IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.651957  188133 main.go:141] libmachine: (no-preload-916885) Reserved static IP address: 192.168.72.239
	I0731 20:58:39.651995  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.652023  188133 main.go:141] libmachine: (no-preload-916885) Waiting for SSH to be available...
	I0731 20:58:39.652054  188133 main.go:141] libmachine: (no-preload-916885) DBG | skip adding static IP to network mk-no-preload-916885 - found existing host DHCP lease matching {name: "no-preload-916885", mac: "52:54:00:46:b1:6a", ip: "192.168.72.239"}
	I0731 20:58:39.652073  188133 main.go:141] libmachine: (no-preload-916885) DBG | Getting to WaitForSSH function...
	I0731 20:58:39.654095  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654450  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.654479  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.654636  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH client type: external
	I0731 20:58:39.654659  188133 main.go:141] libmachine: (no-preload-916885) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa (-rw-------)
	I0731 20:58:39.654714  188133 main.go:141] libmachine: (no-preload-916885) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:39.654729  188133 main.go:141] libmachine: (no-preload-916885) DBG | About to run SSH command:
	I0731 20:58:39.654768  188133 main.go:141] libmachine: (no-preload-916885) DBG | exit 0
	I0731 20:58:39.781409  188133 main.go:141] libmachine: (no-preload-916885) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:39.781741  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetConfigRaw
	I0731 20:58:39.782349  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:39.784813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785234  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.785266  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.785643  188133 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/config.json ...
	I0731 20:58:39.785859  188133 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:39.785879  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:39.786095  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.788573  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.788840  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.788868  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.789025  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.789203  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.789495  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.789661  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.789927  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.789941  188133 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:39.901661  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:39.901687  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.901920  188133 buildroot.go:166] provisioning hostname "no-preload-916885"
	I0731 20:58:39.901953  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:39.902142  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:39.904763  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905159  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:39.905186  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:39.905347  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:39.905534  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905698  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:39.905822  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:39.905977  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:39.906137  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:39.906155  188133 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-916885 && echo "no-preload-916885" | sudo tee /etc/hostname
	I0731 20:58:40.030955  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-916885
	
	I0731 20:58:40.030979  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.033905  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034254  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.034276  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.034487  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.034693  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.034868  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.035014  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.035197  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.035373  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.035392  188133 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-916885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-916885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-916885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:40.154331  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:40.154381  188133 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:40.154436  188133 buildroot.go:174] setting up certificates
	I0731 20:58:40.154452  188133 provision.go:84] configureAuth start
	I0731 20:58:40.154474  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetMachineName
	I0731 20:58:40.154813  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:40.157702  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158053  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.158075  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.158218  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.160715  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161030  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.161048  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.161186  188133 provision.go:143] copyHostCerts
	I0731 20:58:40.161258  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:40.161267  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:40.161372  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:40.161477  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:40.161487  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:40.161520  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:40.161590  188133 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:40.161606  188133 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:40.161639  188133 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:40.161700  188133 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.no-preload-916885 san=[127.0.0.1 192.168.72.239 localhost minikube no-preload-916885]
	I0731 20:58:40.341529  188133 provision.go:177] copyRemoteCerts
	I0731 20:58:40.341586  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:40.341612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.344557  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.344851  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.344871  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.345080  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.345266  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.345432  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.345677  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.431395  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:40.455012  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0731 20:58:40.477721  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:40.500174  188133 provision.go:87] duration metric: took 345.705192ms to configureAuth
	I0731 20:58:40.500203  188133 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:40.500377  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 20:58:40.500462  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.503077  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503438  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.503467  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.503586  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.503780  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.503947  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.504065  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.504245  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.504467  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.504489  188133 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:58:40.765409  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:58:40.765448  188133 machine.go:97] duration metric: took 979.574417ms to provisionDockerMachine
	I0731 20:58:40.765460  188133 start.go:293] postStartSetup for "no-preload-916885" (driver="kvm2")
	I0731 20:58:40.765474  188133 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:58:40.765525  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:40.765895  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:58:40.765928  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.768314  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768610  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.768657  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.768760  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.768926  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.769089  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.769199  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:40.855821  188133 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:58:40.860032  188133 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:58:40.860071  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:58:40.860148  188133 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:58:40.860251  188133 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:58:40.860367  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:58:40.869291  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:40.892945  188133 start.go:296] duration metric: took 127.469545ms for postStartSetup
	I0731 20:58:40.892991  188133 fix.go:56] duration metric: took 21.083232755s for fixHost
	I0731 20:58:40.893019  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:40.895784  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896166  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:40.896197  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:40.896316  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:40.896501  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896654  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:40.896777  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:40.896964  188133 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:40.897133  188133 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.239 22 <nil> <nil>}
	I0731 20:58:40.897143  188133 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:58:41.010330  188133 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459520.969906971
	
	I0731 20:58:41.010352  188133 fix.go:216] guest clock: 1722459520.969906971
	I0731 20:58:41.010360  188133 fix.go:229] Guest: 2024-07-31 20:58:40.969906971 +0000 UTC Remote: 2024-07-31 20:58:40.892995844 +0000 UTC m=+276.656012666 (delta=76.911127ms)
	I0731 20:58:41.010390  188133 fix.go:200] guest clock delta is within tolerance: 76.911127ms
	I0731 20:58:41.010396  188133 start.go:83] releasing machines lock for "no-preload-916885", held for 21.200662427s
	I0731 20:58:41.010429  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.010733  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:41.013519  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.013841  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.013867  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.014034  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014637  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014829  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 20:58:41.014914  188133 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:58:41.014974  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.015051  188133 ssh_runner.go:195] Run: cat /version.json
	I0731 20:58:41.015074  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 20:58:41.017813  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.017837  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018170  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018205  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:41.018225  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018239  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:41.018482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018493  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 20:58:41.018678  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018694  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 20:58:41.018862  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018885  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 20:58:41.018965  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.019040  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 20:58:41.107999  188133 ssh_runner.go:195] Run: systemctl --version
	I0731 20:58:41.133039  188133 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:58:41.279485  188133 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:58:41.285765  188133 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:58:41.285838  188133 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:58:41.302175  188133 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:58:41.302203  188133 start.go:495] detecting cgroup driver to use...
	I0731 20:58:41.302280  188133 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:58:41.319896  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:58:41.334618  188133 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:58:41.334689  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:58:41.348292  188133 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:58:41.363968  188133 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:58:41.472992  188133 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:58:41.605581  188133 docker.go:233] disabling docker service ...
	I0731 20:58:41.605669  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:58:41.620414  188133 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:58:41.632951  188133 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:58:41.783942  188133 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:58:41.912311  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:58:41.931076  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:58:41.954672  188133 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 20:58:41.954752  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.967478  188133 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:58:41.967567  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.978990  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:41.991689  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.003168  188133 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:58:42.019114  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.034607  188133 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.057543  188133 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:58:42.070420  188133 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:58:42.081173  188133 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:58:42.081245  188133 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:58:42.095455  188133 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:58:42.106943  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:42.221724  188133 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:58:42.375966  188133 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:58:42.376051  188133 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:58:42.381473  188133 start.go:563] Will wait 60s for crictl version
	I0731 20:58:42.381548  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.385364  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:58:42.426783  188133 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:58:42.426872  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.459096  188133 ssh_runner.go:195] Run: crio --version
	I0731 20:58:42.490043  188133 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 20:58:42.491578  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetIP
	I0731 20:58:42.494915  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495289  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 20:58:42.495310  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 20:58:42.495610  188133 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 20:58:42.500266  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:42.515164  188133 kubeadm.go:883] updating cluster {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:58:42.515295  188133 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 20:58:42.515332  188133 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:58:42.551930  188133 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0731 20:58:42.551961  188133 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:58:42.552025  188133 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.552047  188133 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 20:58:42.552067  188133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.552087  188133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.552071  188133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.552028  188133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.552129  188133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.552035  188133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554026  188133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.554044  188133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.554103  188133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.554112  188133 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 20:58:42.554123  188133 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:42.554030  188133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.554032  188133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.554027  188133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.721659  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.743910  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.750941  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0731 20:58:42.772074  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.781921  188133 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0731 20:58:42.781964  188133 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.782014  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.793926  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.813112  188133 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0731 20:58:42.813154  188133 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.813202  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.916544  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:42.937647  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:42.948145  188133 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0731 20:58:42.948194  188133 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:42.948208  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0731 20:58:42.948237  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:42.948268  188133 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0731 20:58:42.948300  188133 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:42.948338  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0731 20:58:42.948341  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.006187  188133 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0731 20:58:43.006238  188133 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.006295  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045484  188133 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0731 20:58:43.045541  188133 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.045585  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:43.045589  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 20:58:43.045643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 20:58:43.045710  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0731 20:58:43.045730  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.045741  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 20:58:43.045780  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 20:58:43.045823  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:43.122382  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122429  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0731 20:58:43.122449  188133 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122489  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:43.122497  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0731 20:58:43.122513  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0731 20:58:43.122517  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122588  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 20:58:43.122637  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:43.122643  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.122731  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:43.522969  188133 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:41.037393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Start
	I0731 20:58:41.037575  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring networks are active...
	I0731 20:58:41.038366  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network default is active
	I0731 20:58:41.038703  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Ensuring network mk-default-k8s-diff-port-125614 is active
	I0731 20:58:41.039402  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Getting domain xml...
	I0731 20:58:41.040218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Creating domain...
	I0731 20:58:42.319123  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting to get IP...
	I0731 20:58:42.320314  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320801  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.320908  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.320797  189429 retry.go:31] will retry after 274.801111ms: waiting for machine to come up
	I0731 20:58:42.597444  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597866  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.597914  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.597842  189429 retry.go:31] will retry after 382.328248ms: waiting for machine to come up
	I0731 20:58:42.981533  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982018  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:42.982051  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:42.981955  189429 retry.go:31] will retry after 426.247953ms: waiting for machine to come up
	I0731 20:58:43.409523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.409867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.409795  189429 retry.go:31] will retry after 483.501118ms: waiting for machine to come up
	I0731 20:58:43.894451  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894844  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:43.894874  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:43.894779  189429 retry.go:31] will retry after 759.968593ms: waiting for machine to come up
	I0731 20:58:44.656097  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656551  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:44.656580  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:44.656503  189429 retry.go:31] will retry after 766.563008ms: waiting for machine to come up
	I0731 20:58:45.424382  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424793  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:45.424831  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:45.424744  189429 retry.go:31] will retry after 1.172047019s: waiting for machine to come up
	I0731 20:58:45.107333  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.984807614s)
	I0731 20:58:45.107368  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0731 20:58:45.107393  188133 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107452  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0731 20:58:45.107471  188133 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0: (1.98485492s)
	I0731 20:58:45.107523  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.985012474s)
	I0731 20:58:45.107534  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107560  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107563  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.984910291s)
	I0731 20:58:45.107585  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107609  188133 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.984862504s)
	I0731 20:58:45.107619  188133 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:45.107626  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0731 20:58:45.107668  188133 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.584674739s)
	I0731 20:58:45.107701  188133 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0731 20:58:45.107729  188133 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:45.107761  188133 ssh_runner.go:195] Run: which crictl
	I0731 20:58:48.706832  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.599347822s)
	I0731 20:58:48.706872  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0731 20:58:48.706886  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (3.599247467s)
	I0731 20:58:48.706923  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0731 20:58:48.706898  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.706925  188133 ssh_runner.go:235] Completed: which crictl: (3.599146318s)
	I0731 20:58:48.706979  188133 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:58:48.706980  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0731 20:58:48.747292  188133 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 20:58:48.747415  188133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:46.598636  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599086  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:46.599117  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:46.599033  189429 retry.go:31] will retry after 1.204122239s: waiting for machine to come up
	I0731 20:58:47.805441  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805922  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:47.805953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:47.805864  189429 retry.go:31] will retry after 1.466632244s: waiting for machine to come up
	I0731 20:58:49.274527  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:49.275030  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:49.274961  189429 retry.go:31] will retry after 2.04848438s: waiting for machine to come up
	I0731 20:58:50.902082  188133 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.154633427s)
	I0731 20:58:50.902138  188133 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0731 20:58:50.902203  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.195118092s)
	I0731 20:58:50.902226  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0731 20:58:50.902259  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:50.902320  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0731 20:58:52.863335  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.960989386s)
	I0731 20:58:52.863370  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0731 20:58:52.863394  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:52.863434  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0731 20:58:51.324633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325056  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:51.325080  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:51.324983  189429 retry.go:31] will retry after 1.991151757s: waiting for machine to come up
	I0731 20:58:53.318784  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319133  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:53.319164  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:53.319077  189429 retry.go:31] will retry after 2.631932264s: waiting for machine to come up
	I0731 20:58:54.629811  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.766355185s)
	I0731 20:58:54.629840  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0731 20:58:54.629882  188133 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:54.629954  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0731 20:58:55.983610  188133 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.353622135s)
	I0731 20:58:55.983655  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0731 20:58:55.983692  188133 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:55.983764  188133 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0731 20:58:56.828512  188133 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 20:58:56.828560  188133 cache_images.go:123] Successfully loaded all cached images
	I0731 20:58:56.828568  188133 cache_images.go:92] duration metric: took 14.276593942s to LoadCachedImages
	I0731 20:58:56.828583  188133 kubeadm.go:934] updating node { 192.168.72.239 8443 v1.31.0-beta.0 crio true true} ...
	I0731 20:58:56.828722  188133 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-916885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:58:56.828806  188133 ssh_runner.go:195] Run: crio config
	I0731 20:58:56.877187  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:58:56.877222  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:58:56.877245  188133 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:58:56.877269  188133 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.239 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-916885 NodeName:no-preload-916885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:58:56.877442  188133 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-916885"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:58:56.877526  188133 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 20:58:56.887721  188133 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:58:56.887796  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:58:56.896845  188133 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0731 20:58:56.912886  188133 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 20:58:56.928914  188133 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0731 20:58:56.945604  188133 ssh_runner.go:195] Run: grep 192.168.72.239	control-plane.minikube.internal$ /etc/hosts
	I0731 20:58:56.949538  188133 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:58:56.961490  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:58:57.075114  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:58:57.091701  188133 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885 for IP: 192.168.72.239
	I0731 20:58:57.091724  188133 certs.go:194] generating shared ca certs ...
	I0731 20:58:57.091743  188133 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:58:57.091909  188133 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:58:57.091959  188133 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:58:57.091971  188133 certs.go:256] generating profile certs ...
	I0731 20:58:57.092062  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/client.key
	I0731 20:58:57.092141  188133 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key.cc7e9c96
	I0731 20:58:57.092193  188133 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key
	I0731 20:58:57.092330  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:58:57.092405  188133 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:58:57.092423  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:58:57.092458  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:58:57.092489  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:58:57.092520  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:58:57.092586  188133 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:58:57.093296  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:58:57.139431  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:58:57.169132  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:58:57.196541  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:58:57.232826  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 20:58:57.260875  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:58:57.290195  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:58:57.316645  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/no-preload-916885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:58:57.339741  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:58:57.362406  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:58:57.385009  188133 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:58:57.407540  188133 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:58:57.423697  188133 ssh_runner.go:195] Run: openssl version
	I0731 20:58:57.429741  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:58:57.440545  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.444984  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.445035  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:58:57.450651  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:58:57.460547  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:58:57.470575  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474939  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.474988  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:58:57.480481  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:58:57.490404  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:58:57.500433  188133 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504785  188133 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.504835  188133 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:58:57.510165  188133 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:58:57.520019  188133 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:58:57.524596  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:58:57.530667  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:58:57.536315  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:58:57.542049  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:58:57.547594  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:58:57.553084  188133 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:58:57.558419  188133 kubeadm.go:392] StartCluster: {Name:no-preload-916885 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-916885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:58:57.558501  188133 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:58:57.558537  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.600004  188133 cri.go:89] found id: ""
	I0731 20:58:57.600087  188133 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:58:57.609911  188133 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:58:57.609933  188133 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:58:57.609975  188133 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:58:57.619498  188133 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:58:57.621885  188133 kubeconfig.go:125] found "no-preload-916885" server: "https://192.168.72.239:8443"
	I0731 20:58:57.624838  188133 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:58:57.633984  188133 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.239
	I0731 20:58:57.634025  188133 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:58:57.634037  188133 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:58:57.634080  188133 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:58:57.672988  188133 cri.go:89] found id: ""
	I0731 20:58:57.673053  188133 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:58:57.689149  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:58:57.698520  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:58:57.698541  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 20:58:57.698595  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:58:57.707106  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:58:57.707163  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:58:57.715878  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:58:57.724169  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:58:57.724219  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:58:57.732890  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.741121  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:58:57.741174  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:58:57.749776  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:58:57.758063  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:58:57.758114  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:58:57.766815  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:58:57.775595  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:57.883689  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.740684  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.926231  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:58.987089  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:58:59.049782  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:58:59.049862  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.418227  188656 start.go:364] duration metric: took 3m46.480116699s to acquireMachinesLock for "old-k8s-version-239115"
	I0731 20:59:00.418294  188656 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:00.418302  188656 fix.go:54] fixHost starting: 
	I0731 20:59:00.418738  188656 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:00.418773  188656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:00.438533  188656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0731 20:59:00.438963  188656 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:00.439499  188656 main.go:141] libmachine: Using API Version  1
	I0731 20:59:00.439524  188656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:00.439930  188656 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:00.441449  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:00.441651  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetState
	I0731 20:59:00.443465  188656 fix.go:112] recreateIfNeeded on old-k8s-version-239115: state=Stopped err=<nil>
	I0731 20:59:00.443505  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	W0731 20:59:00.443679  188656 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:00.445840  188656 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-239115" ...
	I0731 20:58:55.953940  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954393  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | unable to find current IP address of domain default-k8s-diff-port-125614 in network mk-default-k8s-diff-port-125614
	I0731 20:58:55.954422  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | I0731 20:58:55.954356  189429 retry.go:31] will retry after 3.068212527s: waiting for machine to come up
	I0731 20:58:59.025966  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026388  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has current primary IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.026406  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Found IP for machine: 192.168.50.221
	I0731 20:58:59.026417  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserving static IP address...
	I0731 20:58:59.026867  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Reserved static IP address: 192.168.50.221
	I0731 20:58:59.026918  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.026933  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Waiting for SSH to be available...
	I0731 20:58:59.026954  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | skip adding static IP to network mk-default-k8s-diff-port-125614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-125614", mac: "52:54:00:c8:c7:f0", ip: "192.168.50.221"}
	I0731 20:58:59.026972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Getting to WaitForSSH function...
	I0731 20:58:59.029330  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029685  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.029720  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.029820  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH client type: external
	I0731 20:58:59.029846  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa (-rw-------)
	I0731 20:58:59.029877  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:58:59.029894  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | About to run SSH command:
	I0731 20:58:59.029906  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | exit 0
	I0731 20:58:59.161209  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | SSH cmd err, output: <nil>: 
	I0731 20:58:59.161713  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetConfigRaw
	I0731 20:58:59.162331  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.164645  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.164953  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.164986  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.165269  188266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/config.json ...
	I0731 20:58:59.165479  188266 machine.go:94] provisionDockerMachine start ...
	I0731 20:58:59.165503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:58:59.165692  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.167796  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.168110  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.168247  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.168408  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168626  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.168763  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.168901  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.169103  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.169115  188266 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:58:59.281875  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:58:59.281901  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282185  188266 buildroot.go:166] provisioning hostname "default-k8s-diff-port-125614"
	I0731 20:58:59.282218  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.282460  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.284970  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285461  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.285498  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.285612  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.285814  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286004  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.286139  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.286278  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.286445  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.286460  188266 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-125614 && echo "default-k8s-diff-port-125614" | sudo tee /etc/hostname
	I0731 20:58:59.411873  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-125614
	
	I0731 20:58:59.411904  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.414733  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415065  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.415099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.415274  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.415463  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415604  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.415751  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.415898  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.416074  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.416090  188266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-125614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-125614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-125614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:58:59.539168  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:58:59.539210  188266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:58:59.539247  188266 buildroot.go:174] setting up certificates
	I0731 20:58:59.539256  188266 provision.go:84] configureAuth start
	I0731 20:58:59.539267  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetMachineName
	I0731 20:58:59.539595  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:58:59.542447  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.542887  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.542916  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.543103  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.545597  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.545972  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.545992  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.546128  188266 provision.go:143] copyHostCerts
	I0731 20:58:59.546195  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:58:59.546206  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:58:59.546265  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:58:59.546366  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:58:59.546377  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:58:59.546407  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:58:59.546488  188266 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:58:59.546492  188266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:58:59.546517  188266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:58:59.546565  188266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-125614 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-125614 localhost minikube]
	I0731 20:58:59.690753  188266 provision.go:177] copyRemoteCerts
	I0731 20:58:59.690811  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:58:59.690839  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.693800  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694141  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.694175  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.694383  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.694583  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.694748  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.694884  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:58:59.783710  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:58:59.814512  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0731 20:58:59.843492  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:58:59.867793  188266 provision.go:87] duration metric: took 328.521723ms to configureAuth
	I0731 20:58:59.867840  188266 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:58:59.868013  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:58:59.868089  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:58:59.871214  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871655  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:58:59.871684  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:58:59.871875  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:58:59.872127  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872309  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:58:59.872503  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:58:59.872687  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:58:59.872909  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:58:59.872935  188266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:00.165458  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:00.165492  188266 machine.go:97] duration metric: took 999.996831ms to provisionDockerMachine
	I0731 20:59:00.165509  188266 start.go:293] postStartSetup for "default-k8s-diff-port-125614" (driver="kvm2")
	I0731 20:59:00.165527  188266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:00.165549  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.165936  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:00.165973  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.168477  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168837  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.168864  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.168991  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.169203  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.169387  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.169575  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.262132  188266 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:00.266596  188266 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:00.266621  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:00.266695  188266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:00.266789  188266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:00.266909  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:00.276407  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:00.300017  188266 start.go:296] duration metric: took 134.490488ms for postStartSetup
	I0731 20:59:00.300061  188266 fix.go:56] duration metric: took 19.289494966s for fixHost
	I0731 20:59:00.300087  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.302714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303073  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.303106  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.303249  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.303448  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303633  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.303786  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.303978  188266 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:00.304204  188266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0731 20:59:00.304217  188266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:00.418073  188266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459540.389901096
	
	I0731 20:59:00.418096  188266 fix.go:216] guest clock: 1722459540.389901096
	I0731 20:59:00.418105  188266 fix.go:229] Guest: 2024-07-31 20:59:00.389901096 +0000 UTC Remote: 2024-07-31 20:59:00.30006642 +0000 UTC m=+284.542031804 (delta=89.834676ms)
	I0731 20:59:00.418130  188266 fix.go:200] guest clock delta is within tolerance: 89.834676ms
	I0731 20:59:00.418138  188266 start.go:83] releasing machines lock for "default-k8s-diff-port-125614", held for 19.407605953s
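fix.go above reads the guest clock with "date +%s.%N", compares it against the host clock, and only resynchronises when the delta exceeds a tolerance (here the ~90ms delta is accepted). A minimal Go sketch of that comparison, with the 2-second threshold as an assumed illustrative value rather than the one minikube actually uses:

package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether the guest/host clock delta is small
// enough to skip resynchronising the guest clock.
func withinClockTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1722459540, 389901096)                      // parsed from `date +%s.%N` on the guest
	host := time.Date(2024, 7, 31, 20, 59, 0, 300066420, time.UTC) // host-side timestamp from the log
	delta, ok := withinClockTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta is %v, within tolerance: %v\n", delta, ok)
}
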
	I0731 20:59:00.418167  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.418669  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:00.421683  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422050  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.422090  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.422234  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.422999  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:00.423061  188266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:00.423119  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.423354  188266 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:00.423378  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:00.426188  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426362  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426603  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426631  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.426790  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.426882  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:00.426929  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:00.427019  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427197  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427208  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:00.427363  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.427380  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:00.427523  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:00.427668  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:00.511834  188266 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:00.536649  188266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:00.692463  188266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:00.700344  188266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:00.700413  188266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:00.721837  188266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
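The find/mv step above renames any bridge or podman CNI configuration with a ".mk_disabled" suffix so it cannot conflict with the CNI that minikube configures later. A rough Go equivalent of that rename pass, assuming it runs on the guest itself (the real code shells the find command out over SSH instead):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs under dir so the
// container runtime no longer loads them, mirroring the logged find/mv step.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Println("disabled:", disabled, "err:", err)
}
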
	I0731 20:59:00.721863  188266 start.go:495] detecting cgroup driver to use...
	I0731 20:59:00.721940  188266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:00.742477  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:00.760049  188266 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:00.760120  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:00.777823  188266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:00.791680  188266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:00.908094  188266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:01.051284  188266 docker.go:233] disabling docker service ...
	I0731 20:59:01.051379  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:01.070927  188266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:01.083393  188266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:01.223186  188266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:01.355265  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:01.369810  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:01.390523  188266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:01.390588  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.401241  188266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:01.401308  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.412049  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.422145  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.432523  188266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:01.442640  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.456933  188266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.475628  188266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:01.486226  188266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:01.496757  188266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:01.496813  188266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:01.510264  188266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:01.520231  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:01.636451  188266 ssh_runner.go:195] Run: sudo systemctl restart crio
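The sequence above points CRI-O at the cgroupfs cgroup manager, loads br_netfilter after the sysctl probe returns status 255, enables IPv4 forwarding, and then restarts CRI-O. A condensed sketch of that command sequence; running it locally through os/exec is only an illustration, since minikube executes each command on the guest via ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// crioPrepCommands mirrors the sed/modprobe/systemctl steps visible in the
// log above. Paths and expressions are copied from the log.
var crioPrepCommands = []string{
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	// br_netfilter is not loaded on the fresh guest (the sysctl check failed),
	// so load the module before enabling forwarding.
	`sudo modprobe br_netfilter`,
	`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart crio`,
}

func main() {
	for _, cmd := range crioPrepCommands {
		fmt.Println("run:", cmd)
		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("  failed: %v\n%s", err, out)
			return
		}
	}
}
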
	I0731 20:59:01.784134  188266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:01.784220  188266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:01.788836  188266 start.go:563] Will wait 60s for crictl version
	I0731 20:59:01.788895  188266 ssh_runner.go:195] Run: which crictl
	I0731 20:59:01.793059  188266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:01.840110  188266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:01.840200  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.868816  188266 ssh_runner.go:195] Run: crio --version
	I0731 20:59:01.908539  188266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:00.447208  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .Start
	I0731 20:59:00.447389  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring networks are active...
	I0731 20:59:00.448116  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network default is active
	I0731 20:59:00.448589  188656 main.go:141] libmachine: (old-k8s-version-239115) Ensuring network mk-old-k8s-version-239115 is active
	I0731 20:59:00.448892  188656 main.go:141] libmachine: (old-k8s-version-239115) Getting domain xml...
	I0731 20:59:00.450110  188656 main.go:141] libmachine: (old-k8s-version-239115) Creating domain...
	I0731 20:59:01.823554  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting to get IP...
	I0731 20:59:01.824648  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:01.825111  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:01.825172  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:01.825080  189574 retry.go:31] will retry after 241.700507ms: waiting for machine to come up
	I0731 20:59:02.068913  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.069608  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.069738  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.069663  189574 retry.go:31] will retry after 258.921821ms: waiting for machine to come up
	I0731 20:59:02.330231  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.330846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.330877  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.330776  189574 retry.go:31] will retry after 460.911793ms: waiting for machine to come up
	I0731 20:59:02.793718  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:02.794177  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:02.794206  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:02.794156  189574 retry.go:31] will retry after 380.241989ms: waiting for machine to come up
	I0731 20:59:03.175918  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.176761  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.176786  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.176670  189574 retry.go:31] will retry after 631.876736ms: waiting for machine to come up
	I0731 20:59:03.810803  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:03.811478  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:03.811503  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:03.811366  189574 retry.go:31] will retry after 583.328017ms: waiting for machine to come up
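The retry.go lines above poll for the old-k8s-version VM's DHCP lease, sleeping a randomised, growing interval between attempts. A hedged Go sketch of that wait loop; the lookup function and the placeholder address are stand-ins for the real libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, with jittered growing
// backoff, matching the "will retry after ...: waiting for machine to come up"
// shape in the log. lookup stands in for reading the libvirt DHCP leases.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.0.2.10", nil // placeholder address for the example
	}, 30*time.Second)
	fmt.Println(ip, err)
}
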
	I0731 20:58:59.550347  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.050077  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:00.066942  188133 api_server.go:72] duration metric: took 1.017157745s to wait for apiserver process to appear ...
	I0731 20:59:00.066991  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:00.067016  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:00.067685  188133 api_server.go:269] stopped: https://192.168.72.239:8443/healthz: Get "https://192.168.72.239:8443/healthz": dial tcp 192.168.72.239:8443: connect: connection refused
	I0731 20:59:00.567237  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.555694  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.555739  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.555756  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.606602  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.606641  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:03.606657  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:03.617900  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:03.617935  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:04.067724  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.073838  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.073875  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:04.568116  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:04.575013  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:04.575044  188133 api_server.go:103] status: https://192.168.72.239:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:05.067154  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 20:59:05.073314  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 20:59:05.083559  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 20:59:05.083595  188133 api_server.go:131] duration metric: took 5.016595337s to wait for apiserver health ...
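The healthz exchange above is the expected progression for a restarting control plane: connection refused while the apiserver binds, 403 for the anonymous probe before RBAC bootstrap completes, 500 while individual poststarthooks are still failing, and finally 200. A minimal Go sketch of that polling loop; skipping TLS verification is only for the illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// printing the intermediate refused/403/500 responses like the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.239:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
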
	I0731 20:59:05.083606  188133 cni.go:84] Creating CNI manager for ""
	I0731 20:59:05.083614  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:05.085564  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:01.910091  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetIP
	I0731 20:59:01.913322  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.913714  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:01.913747  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:01.914046  188266 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:01.918504  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:01.930599  188266 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:01.930756  188266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:01.930826  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:01.968796  188266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:01.968882  188266 ssh_runner.go:195] Run: which lz4
	I0731 20:59:01.974123  188266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:01.979542  188266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:01.979575  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:03.529579  188266 crio.go:462] duration metric: took 1.555502358s to copy over tarball
	I0731 20:59:03.529662  188266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:04.395886  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:04.396400  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:04.396664  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:04.396347  189574 retry.go:31] will retry after 1.154504022s: waiting for machine to come up
	I0731 20:59:05.552240  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:05.552879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:05.552901  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:05.552831  189574 retry.go:31] will retry after 1.037365333s: waiting for machine to come up
	I0731 20:59:06.591875  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:06.592416  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:06.592450  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:06.592329  189574 retry.go:31] will retry after 1.249444079s: waiting for machine to come up
	I0731 20:59:07.843058  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:07.843436  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:07.843463  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:07.843370  189574 retry.go:31] will retry after 1.700521776s: waiting for machine to come up
	I0731 20:59:05.087080  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:05.105303  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:05.125019  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:05.136768  188133 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:05.136823  188133 system_pods.go:61] "coredns-5cfdc65f69-c9gcf" [3b9458d3-81d0-4138-8a6a-81f087c3ed02] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:05.136836  188133 system_pods.go:61] "etcd-no-preload-916885" [aa31006d-8e74-48c2-9b5d-5604b3a1c283] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:05.136847  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [64549ba0-8e30-41d3-82eb-cdb729623a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:05.136856  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [2620c741-c27a-4df5-8555-58767d43c675] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:05.136866  188133 system_pods.go:61] "kube-proxy-99jgm" [0060c1a0-badc-401c-a4dc-5cfaa958654e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:05.136880  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [f02a0a1d-5cbb-4ee3-a084-21710667565e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:05.136894  188133 system_pods.go:61] "metrics-server-78fcd8795b-jrzgg" [acbe48be-32e9-44f8-9bf2-52e0e92a09e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:05.136912  188133 system_pods.go:61] "storage-provisioner" [d0f902cd-d1db-4c70-bdad-34bda917cec1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:05.136926  188133 system_pods.go:74] duration metric: took 11.882384ms to wait for pod list to return data ...
	I0731 20:59:05.136937  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:05.142117  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:05.142149  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:05.142165  188133 node_conditions.go:105] duration metric: took 5.221098ms to run NodePressure ...
	I0731 20:59:05.142187  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:05.534597  188133 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539583  188133 kubeadm.go:739] kubelet initialised
	I0731 20:59:05.539604  188133 kubeadm.go:740] duration metric: took 4.980297ms waiting for restarted kubelet to initialise ...
	I0731 20:59:05.539626  188133 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:05.544498  188133 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:07.778624  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
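pod_ready.go above waits for each system-critical pod's Ready condition to turn True (coredns is still reporting Ready: False at this point). A hedged client-go sketch of that check; the kubeconfig path in main is a placeholder, and the real code builds its client from the minikube profile instead:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True, which is
// the condition behind the "has status Ready:False" lines above.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path for the example.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-c9gcf", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
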
	I0731 20:59:06.024682  188266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.494984583s)
	I0731 20:59:06.024718  188266 crio.go:469] duration metric: took 2.495107603s to extract the tarball
	I0731 20:59:06.024729  188266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:06.062675  188266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:06.107619  188266 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:06.107649  188266 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:59:06.107667  188266 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0731 20:59:06.107792  188266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-125614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:06.107863  188266 ssh_runner.go:195] Run: crio config
	I0731 20:59:06.173983  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:06.174007  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:06.174019  188266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:06.174040  188266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-125614 NodeName:default-k8s-diff-port-125614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:06.174168  188266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-125614"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:06.174233  188266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:06.185059  188266 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:06.185189  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:06.196571  188266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0731 20:59:06.218964  188266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:06.239033  188266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0731 20:59:06.260519  188266 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:06.264718  188266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
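The bash one-liner above rewrites /etc/hosts idempotently: strip any stale line for control-plane.minikube.internal, append the current mapping, and copy the result back with sudo. A rough Go equivalent of that rewrite, assuming it runs with permission to write /etc/hosts directly:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing mapping for host and appends ip<TAB>host,
// mirroring the grep/echo/cp one-liner in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		fields := strings.Fields(line)
		// Drop a line only when its last name is exactly the host being set.
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.221", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
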
	I0731 20:59:06.278173  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:06.423941  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:06.441663  188266 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614 for IP: 192.168.50.221
	I0731 20:59:06.441689  188266 certs.go:194] generating shared ca certs ...
	I0731 20:59:06.441711  188266 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:06.441906  188266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:06.441965  188266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:06.441978  188266 certs.go:256] generating profile certs ...
	I0731 20:59:06.442080  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/client.key
	I0731 20:59:06.442157  188266 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key.9cb12361
	I0731 20:59:06.442205  188266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key
	I0731 20:59:06.442354  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:06.442391  188266 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:06.442404  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:06.442447  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:06.442478  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:06.442522  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:06.442580  188266 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:06.443470  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:06.497056  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:06.530978  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:06.574533  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:06.619523  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 20:59:06.648269  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:06.677824  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:06.704450  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/default-k8s-diff-port-125614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:06.731606  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:06.756990  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:06.781214  188266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:06.804855  188266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:06.821531  188266 ssh_runner.go:195] Run: openssl version
	I0731 20:59:06.827394  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:06.838680  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843618  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.843681  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:06.850238  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:06.865533  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:06.881516  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886809  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.886876  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:06.893345  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:06.908919  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:06.922150  188266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927165  188266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.927226  188266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:06.933724  188266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
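The openssl/ln pairs above install each CA into the system trust store: the PEM is copied under /usr/share/ca-certificates, "openssl x509 -hash" yields its subject hash, and a /etc/ssl/certs/<hash>.0 symlink is created so OpenSSL's hashed lookup finds it. A small sketch of that step, shelling out to openssl for the hash as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links pemPath into /etc/ssl/certs under its OpenSSL subject
// hash, the same effect as the `ln -fs ... /etc/ssl/certs/<hash>.0` commands
// above. Requires root; error handling kept minimal for the sketch.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
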
	I0731 20:59:06.946420  188266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:06.951347  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:06.959595  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:06.967808  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:06.977083  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:06.985089  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:06.992190  188266 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
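The "-checkend 86400" probes above ask openssl whether each control-plane certificate expires within 24 hours, presumably so stale certificates can be regenerated before the cluster restarts. The same check expressed with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the crypto/x509 equivalent of
// `openssl x509 -noout -in <crt> -checkend 86400`: it reports whether the
// certificate's NotAfter falls inside the given window from now.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, crt := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(crt, 24*time.Hour)
		fmt.Println(crt, "expires within 24h:", soon, "err:", err)
	}
}
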
	I0731 20:59:06.998458  188266 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-125614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-125614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:06.998548  188266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:06.998592  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.053176  188266 cri.go:89] found id: ""
	I0731 20:59:07.053256  188266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:07.064373  188266 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:07.064392  188266 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:07.064433  188266 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:07.075167  188266 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:07.076057  188266 kubeconfig.go:125] found "default-k8s-diff-port-125614" server: "https://192.168.50.221:8444"
	I0731 20:59:07.078091  188266 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:07.089136  188266 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0731 20:59:07.089161  188266 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:07.089174  188266 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:07.089225  188266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:07.133015  188266 cri.go:89] found id: ""
	I0731 20:59:07.133099  188266 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:07.155229  188266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:07.166326  188266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:07.166348  188266 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:07.166418  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0731 20:59:07.176709  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:07.176768  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:07.187232  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0731 20:59:07.197376  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:07.197453  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:07.209451  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.221141  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:07.221205  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:07.232016  188266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0731 20:59:07.242340  188266 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:07.242402  188266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:07.253794  188266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:07.264912  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:07.382193  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.445321  188266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.063086935s)
	I0731 20:59:08.445364  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.664603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.744053  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:08.857284  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:08.857380  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.357505  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.857488  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:09.887329  188266 api_server.go:72] duration metric: took 1.030046485s to wait for apiserver process to appear ...
	I0731 20:59:09.887358  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:09.887405  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.887966  188266 api_server.go:269] stopped: https://192.168.50.221:8444/healthz: Get "https://192.168.50.221:8444/healthz": dial tcp 192.168.50.221:8444: connect: connection refused
	I0731 20:59:10.387674  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:09.545937  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:09.546581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:09.546605  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:09.546529  189574 retry.go:31] will retry after 1.934269586s: waiting for machine to come up
	I0731 20:59:11.482402  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:11.482794  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:11.482823  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:11.482744  189574 retry.go:31] will retry after 2.575131422s: waiting for machine to come up
	I0731 20:59:10.053236  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:10.551437  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:10.551467  188133 pod_ready.go:81] duration metric: took 5.006944467s for pod "coredns-5cfdc65f69-c9gcf" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:10.551480  188133 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:12.559346  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:12.827297  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.827342  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.827390  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.883496  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:12.883538  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:12.887715  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:12.902715  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:12.902746  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.388340  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.392840  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.392872  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:13.888510  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:13.894519  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:13.894553  188266 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:14.388177  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 20:59:14.392557  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 20:59:14.399285  188266 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:14.399321  188266 api_server.go:131] duration metric: took 4.511955505s to wait for apiserver health ...
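	The healthz wait above polls https://192.168.50.221:8444/healthz roughly every 500ms: first the connection is refused, then the endpoint returns 403 and 500 while post-start hooks finish, and finally 200 once the apiserver is healthy. A minimal polling sketch (standard-library Go only, not minikube's api_server.go; TLS verification is skipped purely for the sketch, whereas the real check trusts the cluster CA) could look like this.

	// healthzwait.go — sketch of polling an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skipping verification only for this sketch; production code should use the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: apiserver is healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.221:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}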
	I0731 20:59:14.399333  188266 cni.go:84] Creating CNI manager for ""
	I0731 20:59:14.399340  188266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:14.400987  188266 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:14.401981  188266 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:14.420648  188266 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 20:59:14.441909  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:14.451365  188266 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:14.451406  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:14.451419  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:14.451426  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:14.451432  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:14.451438  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:14.451444  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:14.451461  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:14.451468  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:14.451476  188266 system_pods.go:74] duration metric: took 9.546534ms to wait for pod list to return data ...
	I0731 20:59:14.451486  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:14.454760  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:14.454784  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:14.454795  188266 node_conditions.go:105] duration metric: took 3.303087ms to run NodePressure ...
	I0731 20:59:14.454820  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:14.730635  188266 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735144  188266 kubeadm.go:739] kubelet initialised
	I0731 20:59:14.735165  188266 kubeadm.go:740] duration metric: took 4.500388ms waiting for restarted kubelet to initialise ...
	I0731 20:59:14.735173  188266 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:14.742292  188266 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.749460  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749486  188266 pod_ready.go:81] duration metric: took 7.166399ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.749496  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.749504  188266 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.757068  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757091  188266 pod_ready.go:81] duration metric: took 7.579526ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.757101  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.757109  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.762181  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762203  188266 pod_ready.go:81] duration metric: took 5.083756ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.762213  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.762219  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:14.845070  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845095  188266 pod_ready.go:81] duration metric: took 82.86894ms for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:14.845107  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:14.845113  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.246100  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246131  188266 pod_ready.go:81] duration metric: took 401.011321ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.246150  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-proxy-csdc4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.246159  188266 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:15.645657  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645689  188266 pod_ready.go:81] duration metric: took 399.519543ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:15.645704  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:15.645713  188266 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.045744  188266 pod_ready.go:97] node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045776  188266 pod_ready.go:81] duration metric: took 400.053102ms for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:16.045791  188266 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-125614" hosting pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:16.045800  188266 pod_ready.go:38] duration metric: took 1.310615323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
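	The pod_ready waits above repeatedly read each system pod's status and count it as ready only when its Ready condition is True, skipping pods whose node reports "Ready":"False". As a hedged illustration only (not minikube's pod_ready.go), a single readiness probe against one named pod with client-go could look like the following; the kubeconfig path and pod name are taken from this log.

	// podready.go — sketch of checking a pod's Ready condition via client-go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod currently has condition Ready=True.
	func podReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Kubeconfig path and pod name as they appear in the log.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-121704/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := podReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-gnrgs")
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", ready)
	}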
	I0731 20:59:16.045838  188266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 20:59:16.059046  188266 ops.go:34] apiserver oom_adj: -16
	I0731 20:59:16.059071  188266 kubeadm.go:597] duration metric: took 8.994671774s to restartPrimaryControlPlane
	I0731 20:59:16.059082  188266 kubeadm.go:394] duration metric: took 9.060633072s to StartCluster
	I0731 20:59:16.059104  188266 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.059181  188266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:16.060895  188266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:16.061143  188266 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 20:59:16.061226  188266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 20:59:16.061324  188266 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061386  188266 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061399  188266 addons.go:243] addon storage-provisioner should already be in state true
	I0731 20:59:16.061388  188266 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061400  188266 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-125614"
	I0731 20:59:16.061453  188266 config.go:182] Loaded profile config "default-k8s-diff-port-125614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:16.061495  188266 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.061516  188266 addons.go:243] addon metrics-server should already be in state true
	I0731 20:59:16.061438  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061603  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.061436  188266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-125614"
	I0731 20:59:16.062072  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062084  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062085  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.062110  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062127  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062188  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.062822  188266 out.go:177] * Verifying Kubernetes components...
	I0731 20:59:16.064337  188266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:16.081194  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45591
	I0731 20:59:16.081208  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0731 20:59:16.081197  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I0731 20:59:16.081872  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.081956  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082026  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.082423  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082439  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.082926  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.082951  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083047  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.083058  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.083076  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.083712  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.083754  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.084871  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085484  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.085734  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.085815  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.085845  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.089827  188266 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-125614"
	W0731 20:59:16.089854  188266 addons.go:243] addon default-storageclass should already be in state true
	I0731 20:59:16.089884  188266 host.go:66] Checking if "default-k8s-diff-port-125614" exists ...
	I0731 20:59:16.090245  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.090301  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.106592  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0731 20:59:16.106609  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0731 20:59:16.108751  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.108849  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.109414  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109442  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109546  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.109576  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.109948  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.109953  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.110132  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.110163  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.111216  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0731 20:59:16.111657  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.112217  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.112239  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.112319  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.113374  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.115608  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.115649  188266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:16.115940  188266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:16.115979  188266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:16.116965  188266 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:16.117053  188266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.117069  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 20:59:16.117083  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.118247  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 20:59:16.118268  188266 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 20:59:16.118288  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.120985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121540  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.121563  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.121764  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.121865  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122099  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.122295  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.122371  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.122490  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.122552  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.122632  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.122850  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.123024  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.123218  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.133929  188266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34157
	I0731 20:59:16.134348  188266 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:16.134844  188266 main.go:141] libmachine: Using API Version  1
	I0731 20:59:16.134865  188266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:16.135175  188266 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:16.135389  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetState
	I0731 20:59:16.136985  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .DriverName
	I0731 20:59:16.137272  188266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.137287  188266 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 20:59:16.137313  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHHostname
	I0731 20:59:16.140222  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140543  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:c7:f0", ip: ""} in network mk-default-k8s-diff-port-125614: {Iface:virbr2 ExpiryTime:2024-07-31 21:50:35 +0000 UTC Type:0 Mac:52:54:00:c8:c7:f0 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-125614 Clientid:01:52:54:00:c8:c7:f0}
	I0731 20:59:16.140560  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHPort
	I0731 20:59:16.140762  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | domain default-k8s-diff-port-125614 has defined IP address 192.168.50.221 and MAC address 52:54:00:c8:c7:f0 in network mk-default-k8s-diff-port-125614
	I0731 20:59:16.140795  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHKeyPath
	I0731 20:59:16.140969  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .GetSSHUsername
	I0731 20:59:16.141107  188266 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/default-k8s-diff-port-125614/id_rsa Username:docker}
	I0731 20:59:16.257677  188266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:16.275791  188266 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:16.373528  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 20:59:16.373552  188266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 20:59:16.380797  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 20:59:16.404028  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 20:59:16.406072  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 20:59:16.406098  188266 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 20:59:16.456003  188266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:16.456030  188266 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 20:59:16.517304  188266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 20:59:17.377438  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377468  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377514  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377565  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377765  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377780  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.377797  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.377827  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.377835  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.377930  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.378028  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378354  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.378417  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378424  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.378569  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.378583  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.384110  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.384130  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.384325  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.384341  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428457  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428480  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428766  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.428782  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.428790  188266 main.go:141] libmachine: Making call to close driver server
	I0731 20:59:17.428799  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) Calling .Close
	I0731 20:59:17.428804  188266 main.go:141] libmachine: (default-k8s-diff-port-125614) DBG | Closing plugin on server side
	I0731 20:59:17.429011  188266 main.go:141] libmachine: Successfully made call to close driver server
	I0731 20:59:17.429024  188266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 20:59:17.429040  188266 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-125614"
	I0731 20:59:17.431884  188266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 20:59:14.059385  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:14.059857  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:14.059879  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:14.059819  189574 retry.go:31] will retry after 3.127857327s: waiting for machine to come up
	I0731 20:59:17.189405  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:17.189871  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | unable to find current IP address of domain old-k8s-version-239115 in network mk-old-k8s-version-239115
	I0731 20:59:17.189902  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | I0731 20:59:17.189821  189574 retry.go:31] will retry after 4.516767425s: waiting for machine to come up
	I0731 20:59:14.559493  188133 pod_ready.go:102] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:16.561540  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:16.561568  188133 pod_ready.go:81] duration metric: took 6.010079286s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:16.561580  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068734  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.068756  188133 pod_ready.go:81] duration metric: took 1.507167128s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.068766  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073069  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.073086  188133 pod_ready.go:81] duration metric: took 4.313817ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.073095  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077480  188133 pod_ready.go:92] pod "kube-proxy-99jgm" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.077497  188133 pod_ready.go:81] duration metric: took 4.395483ms for pod "kube-proxy-99jgm" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.077506  188133 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082197  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:18.082221  188133 pod_ready.go:81] duration metric: took 4.709042ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:18.082234  188133 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:17.433072  188266 addons.go:510] duration metric: took 1.371850333s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 20:59:18.280135  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:20.280881  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.082812  187862 start.go:364] duration metric: took 58.27194035s to acquireMachinesLock for "embed-certs-831240"
	I0731 20:59:23.082866  187862 start.go:96] Skipping create...Using existing machine configuration
	I0731 20:59:23.082875  187862 fix.go:54] fixHost starting: 
	I0731 20:59:23.083267  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 20:59:23.083308  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:59:23.101291  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0731 20:59:23.101826  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:59:23.102464  187862 main.go:141] libmachine: Using API Version  1
	I0731 20:59:23.102498  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:59:23.102817  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:59:23.103024  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:23.103187  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 20:59:23.105117  187862 fix.go:112] recreateIfNeeded on embed-certs-831240: state=Stopped err=<nil>
	I0731 20:59:23.105143  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	W0731 20:59:23.105307  187862 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 20:59:23.106919  187862 out.go:177] * Restarting existing kvm2 VM for "embed-certs-831240" ...
	I0731 20:59:21.708296  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708811  188656 main.go:141] libmachine: (old-k8s-version-239115) Found IP for machine: 192.168.61.51
	I0731 20:59:21.708846  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has current primary IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.708860  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserving static IP address...
	I0731 20:59:21.709432  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.709663  188656 main.go:141] libmachine: (old-k8s-version-239115) Reserved static IP address: 192.168.61.51
	I0731 20:59:21.709695  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | skip adding static IP to network mk-old-k8s-version-239115 - found existing host DHCP lease matching {name: "old-k8s-version-239115", mac: "52:54:00:5a:70:0d", ip: "192.168.61.51"}
	I0731 20:59:21.709711  188656 main.go:141] libmachine: (old-k8s-version-239115) Waiting for SSH to be available...
	I0731 20:59:21.709723  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Getting to WaitForSSH function...
	I0731 20:59:21.711911  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712310  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.712345  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.712517  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH client type: external
	I0731 20:59:21.712540  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa (-rw-------)
	I0731 20:59:21.712581  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:21.712598  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | About to run SSH command:
	I0731 20:59:21.712625  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | exit 0
	I0731 20:59:21.838026  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:21.838370  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetConfigRaw
	I0731 20:59:21.839169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:21.842168  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842588  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.842623  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.842866  188656 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/config.json ...
	I0731 20:59:21.843126  188656 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:21.843150  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:21.843388  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.846148  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846657  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.846686  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.846993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.847165  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847360  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.847530  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.847707  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.847938  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.847951  188656 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:21.955109  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:21.955143  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955460  188656 buildroot.go:166] provisioning hostname "old-k8s-version-239115"
	I0731 20:59:21.955492  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:21.955728  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:21.958752  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959146  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:21.959176  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:21.959395  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:21.959620  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959781  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:21.959918  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:21.960078  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:21.960358  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:21.960378  188656 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-239115 && echo "old-k8s-version-239115" | sudo tee /etc/hostname
	I0731 20:59:22.090625  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-239115
	
	I0731 20:59:22.090665  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.093927  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094356  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.094387  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.094729  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.094942  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095153  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.095364  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.095583  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.095819  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.095845  188656 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-239115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-239115/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-239115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:22.217153  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:22.217189  188656 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:22.217215  188656 buildroot.go:174] setting up certificates
	I0731 20:59:22.217229  188656 provision.go:84] configureAuth start
	I0731 20:59:22.217242  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetMachineName
	I0731 20:59:22.217613  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:22.220640  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221082  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.221125  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.221237  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.223811  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224152  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.224180  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.224337  188656 provision.go:143] copyHostCerts
	I0731 20:59:22.224405  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:22.224418  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:22.224485  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:22.224604  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:22.224616  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:22.224654  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:22.224729  188656 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:22.224740  188656 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:22.224766  188656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:22.224833  188656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-239115 san=[127.0.0.1 192.168.61.51 localhost minikube old-k8s-version-239115]
	I0731 20:59:22.407532  188656 provision.go:177] copyRemoteCerts
	I0731 20:59:22.407599  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:22.407625  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.410594  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411007  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.411033  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.411338  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.411582  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.411811  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.412007  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.492781  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:22.518278  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 20:59:22.543018  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
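The configureAuth step above generates a server certificate with SANs for 127.0.0.1, 192.168.61.51, localhost, minikube and old-k8s-version-239115, then copies it to the guest. A quick sketch for confirming the SANs after the copy; the target path is the ServerCertRemotePath shown in the auth options above:

    # inspect the SAN list of the provisioned server certificate on the guest
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A2 'Subject Alternative Name'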
	I0731 20:59:22.568888  188656 provision.go:87] duration metric: took 351.643ms to configureAuth
	I0731 20:59:22.568920  188656 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:22.569099  188656 config.go:182] Loaded profile config "old-k8s-version-239115": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 20:59:22.569169  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.572154  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572471  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.572500  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.572669  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.572872  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.572993  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.573112  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.573249  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.573481  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.573512  188656 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:22.847156  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:22.847193  188656 machine.go:97] duration metric: took 1.004049055s to provisionDockerMachine
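The provisioning step above writes the CRI-O insecure-registry option into /etc/sysconfig/crio.minikube over SSH; the %!s(MISSING) in the logged command appears to be the log formatter consuming the printf verb, not a failure. A sketch of the effective command, assuming the missing %s argument is the quoted block echoed back in the SSH output:

    # write the minikube CRI-O options and restart the runtime
    sudo mkdir -p /etc/sysconfig && printf "%s" "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio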
	I0731 20:59:22.847211  188656 start.go:293] postStartSetup for "old-k8s-version-239115" (driver="kvm2")
	I0731 20:59:22.847229  188656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:22.847284  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:22.847710  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:22.847741  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.850515  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.850935  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.850962  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.851088  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.851288  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.851524  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.851674  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:22.932316  188656 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:22.936672  188656 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:22.936707  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:22.936792  188656 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:22.936894  188656 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:22.937011  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:22.946454  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:22.972952  188656 start.go:296] duration metric: took 125.72216ms for postStartSetup
	I0731 20:59:22.972996  188656 fix.go:56] duration metric: took 22.554695114s for fixHost
	I0731 20:59:22.973026  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:22.975758  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976166  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:22.976198  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:22.976320  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:22.976585  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976782  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:22.976966  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:22.977115  188656 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:22.977275  188656 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.51 22 <nil> <nil>}
	I0731 20:59:22.977284  188656 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 20:59:23.082657  188656 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459563.026856067
	
	I0731 20:59:23.082683  188656 fix.go:216] guest clock: 1722459563.026856067
	I0731 20:59:23.082694  188656 fix.go:229] Guest: 2024-07-31 20:59:23.026856067 +0000 UTC Remote: 2024-07-31 20:59:22.973000729 +0000 UTC m=+249.171273714 (delta=53.855338ms)
	I0731 20:59:23.082721  188656 fix.go:200] guest clock delta is within tolerance: 53.855338ms
	I0731 20:59:23.082727  188656 start.go:83] releasing machines lock for "old-k8s-version-239115", held for 22.664459101s
	I0731 20:59:23.082752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.083052  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:23.086626  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087093  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.087135  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.087366  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.087954  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088159  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .DriverName
	I0731 20:59:23.088251  188656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:23.088303  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.088370  188656 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:23.088392  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHHostname
	I0731 20:59:23.091710  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.091989  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092073  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092101  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092227  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092429  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.092472  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:23.092520  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:23.092618  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.092752  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHPort
	I0731 20:59:23.092803  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.092931  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHKeyPath
	I0731 20:59:23.093100  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetSSHUsername
	I0731 20:59:23.093255  188656 sshutil.go:53] new ssh client: &{IP:192.168.61.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/old-k8s-version-239115/id_rsa Username:docker}
	I0731 20:59:23.175012  188656 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:23.200192  188656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:23.348227  188656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:23.355109  188656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:23.355195  188656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:23.371683  188656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:23.371707  188656 start.go:495] detecting cgroup driver to use...
	I0731 20:59:23.371786  188656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:23.388727  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:23.408830  188656 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:23.408907  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:23.423594  188656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:23.437876  188656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:23.559105  188656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:23.743186  188656 docker.go:233] disabling docker service ...
	I0731 20:59:23.743253  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:23.758053  188656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:23.779951  188656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:20.089173  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:22.092138  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.919494  188656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:24.057230  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:24.072687  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:24.094528  188656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 20:59:24.094600  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.106579  188656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:24.106634  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.120079  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.130759  188656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:24.142925  188656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:24.154760  188656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:24.165059  188656 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:24.165113  188656 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:24.179567  188656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:24.191838  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:24.339078  188656 ssh_runner.go:195] Run: sudo systemctl restart crio
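Collected from the ssh_runner calls above, the CRI-O reconfiguration for this v1.20.0 profile amounts to the following sequence (a sketch; paths and values are exactly those logged):

    # point CRI-O at the old pause image and the cgroupfs driver, then restart it
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter && sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio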
	I0731 20:59:24.515723  188656 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:24.515810  188656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:24.521882  188656 start.go:563] Will wait 60s for crictl version
	I0731 20:59:24.521966  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:24.527655  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:24.581055  188656 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:24.581151  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.623207  188656 ssh_runner.go:195] Run: crio --version
	I0731 20:59:24.662956  188656 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 20:59:22.780311  188266 node_ready.go:53] node "default-k8s-diff-port-125614" has status "Ready":"False"
	I0731 20:59:23.281324  188266 node_ready.go:49] node "default-k8s-diff-port-125614" has status "Ready":"True"
	I0731 20:59:23.281373  188266 node_ready.go:38] duration metric: took 7.005540469s for node "default-k8s-diff-port-125614" to be "Ready" ...
	I0731 20:59:23.281387  188266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:23.291207  188266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299173  188266 pod_ready.go:92] pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.299202  188266 pod_ready.go:81] duration metric: took 7.971632ms for pod "coredns-7db6d8ff4d-gnrgs" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.299215  188266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307561  188266 pod_ready.go:92] pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.307580  188266 pod_ready.go:81] duration metric: took 8.357239ms for pod "etcd-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.307589  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314466  188266 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:23.314544  188266 pod_ready.go:81] duration metric: took 6.946044ms for pod "kube-apiserver-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:23.314565  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.323341  188266 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:23.108292  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Start
	I0731 20:59:23.108473  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring networks are active...
	I0731 20:59:23.109160  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network default is active
	I0731 20:59:23.109575  187862 main.go:141] libmachine: (embed-certs-831240) Ensuring network mk-embed-certs-831240 is active
	I0731 20:59:23.110032  187862 main.go:141] libmachine: (embed-certs-831240) Getting domain xml...
	I0731 20:59:23.110762  187862 main.go:141] libmachine: (embed-certs-831240) Creating domain...
	I0731 20:59:24.457926  187862 main.go:141] libmachine: (embed-certs-831240) Waiting to get IP...
	I0731 20:59:24.458936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.459381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.459477  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.459375  189758 retry.go:31] will retry after 266.695372ms: waiting for machine to come up
	I0731 20:59:24.727938  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:24.728394  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:24.728532  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:24.728451  189758 retry.go:31] will retry after 349.84093ms: waiting for machine to come up
	I0731 20:59:25.080044  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.080634  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.080668  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.080592  189758 retry.go:31] will retry after 324.555122ms: waiting for machine to come up
	I0731 20:59:25.407332  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.407852  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.407877  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.407795  189758 retry.go:31] will retry after 580.815897ms: waiting for machine to come up
	I0731 20:59:25.990957  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:25.991551  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:25.991578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:25.991468  189758 retry.go:31] will retry after 570.045476ms: waiting for machine to come up
	I0731 20:59:26.563493  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:26.563901  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:26.563931  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:26.563853  189758 retry.go:31] will retry after 582.597352ms: waiting for machine to come up
	I0731 20:59:27.148256  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:27.148744  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:27.148773  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:27.148688  189758 retry.go:31] will retry after 1.105713474s: waiting for machine to come up
	I0731 20:59:24.664851  188656 main.go:141] libmachine: (old-k8s-version-239115) Calling .GetIP
	I0731 20:59:24.668464  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.668842  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:70:0d", ip: ""} in network mk-old-k8s-version-239115: {Iface:virbr3 ExpiryTime:2024-07-31 21:59:12 +0000 UTC Type:0 Mac:52:54:00:5a:70:0d Iaid: IPaddr:192.168.61.51 Prefix:24 Hostname:old-k8s-version-239115 Clientid:01:52:54:00:5a:70:0d}
	I0731 20:59:24.668869  188656 main.go:141] libmachine: (old-k8s-version-239115) DBG | domain old-k8s-version-239115 has defined IP address 192.168.61.51 and MAC address 52:54:00:5a:70:0d in network mk-old-k8s-version-239115
	I0731 20:59:24.669103  188656 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:24.674448  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:24.690857  188656 kubeadm.go:883] updating cluster {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:24.691011  188656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 20:59:24.691056  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:24.744259  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:24.744348  188656 ssh_runner.go:195] Run: which lz4
	I0731 20:59:24.749358  188656 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 20:59:24.754299  188656 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:24.754341  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 20:59:26.551495  188656 crio.go:462] duration metric: took 1.802206904s to copy over tarball
	I0731 20:59:26.551571  188656 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:24.589677  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:26.591079  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:29.089923  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:25.824008  188266 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.824037  188266 pod_ready.go:81] duration metric: took 2.509461823s for pod "kube-controller-manager-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.824052  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840569  188266 pod_ready.go:92] pod "kube-proxy-csdc4" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:25.840595  188266 pod_ready.go:81] duration metric: took 16.533543ms for pod "kube-proxy-csdc4" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:25.840613  188266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103726  188266 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace has status "Ready":"True"
	I0731 20:59:26.103759  188266 pod_ready.go:81] duration metric: took 263.1364ms for pod "kube-scheduler-default-k8s-diff-port-125614" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:26.103774  188266 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:28.112583  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:30.610462  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:28.255818  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:28.256478  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:28.256506  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:28.256408  189758 retry.go:31] will retry after 1.3552249s: waiting for machine to come up
	I0731 20:59:29.613070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:29.613661  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:29.613693  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:29.613620  189758 retry.go:31] will retry after 1.522319436s: waiting for machine to come up
	I0731 20:59:31.138020  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:31.138490  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:31.138522  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:31.138434  189758 retry.go:31] will retry after 1.573723862s: waiting for machine to come up
	I0731 20:59:29.653941  188656 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102337952s)
	I0731 20:59:29.653974  188656 crio.go:469] duration metric: took 3.102444338s to extract the tarball
	I0731 20:59:29.653982  188656 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:29.704065  188656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:29.745966  188656 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 20:59:29.746010  188656 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 20:59:29.746076  188656 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.746107  188656 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.746129  188656 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.746149  188656 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.746170  188656 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 20:59:29.746410  188656 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.746423  188656 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.746735  188656 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:29.747978  188656 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.747998  188656 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.748005  188656 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.747951  188656 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:29.748021  188656 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.748091  188656 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.915865  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:29.918049  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:29.950840  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:29.952762  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:29.956317  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:29.959905  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 20:59:30.000707  188656 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 20:59:30.000768  188656 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.000821  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.007207  188656 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 20:59:30.007251  188656 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.007294  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.016613  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.082306  188656 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 20:59:30.082358  188656 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.082364  188656 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 20:59:30.082414  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.082418  188656 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.082557  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.089299  188656 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 20:59:30.089382  188656 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.089427  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.105150  188656 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 20:59:30.105217  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 20:59:30.105246  188656 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 20:59:30.105264  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 20:59:30.105282  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.129702  188656 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 20:59:30.129748  188656 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.129779  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 20:59:30.129826  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 20:59:30.129853  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 20:59:30.129800  188656 ssh_runner.go:195] Run: which crictl
	I0731 20:59:30.188192  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 20:59:30.188243  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 20:59:30.188342  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 20:59:30.188365  188656 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 20:59:30.268231  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 20:59:30.268296  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 20:59:30.268337  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 20:59:30.287822  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 20:59:30.287929  188656 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 20:59:30.635440  188656 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 20:59:30.776879  188656 cache_images.go:92] duration metric: took 1.030849977s to LoadCachedImages
	W0731 20:59:30.777006  188656 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-121704/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0731 20:59:30.777028  188656 kubeadm.go:934] updating node { 192.168.61.51 8443 v1.20.0 crio true true} ...
	I0731 20:59:30.777175  188656 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-239115 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:30.777284  188656 ssh_runner.go:195] Run: crio config
	I0731 20:59:30.832542  188656 cni.go:84] Creating CNI manager for ""
	I0731 20:59:30.832570  188656 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:30.832586  188656 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:30.832618  188656 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.51 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-239115 NodeName:old-k8s-version-239115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 20:59:30.832798  188656 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-239115"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:30.832877  188656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 20:59:30.842909  188656 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:30.842995  188656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:30.852951  188656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0731 20:59:30.872643  188656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:30.889851  188656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
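The 2120-byte payload copied to /var/tmp/minikube/kubeadm.yaml.new above is the kubeadm config rendered earlier in this log. A quick way to eyeball it on the node is via minikube ssh; these exact invocations are illustrative only and were not part of the test run:

    # Illustrative commands (not from the log): view and diff the generated kubeadm config on the node
    minikube ssh -p old-k8s-version-239115 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube ssh -p old-k8s-version-239115 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new

The second command mirrors the diff that minikube itself runs later in this log.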
	I0731 20:59:30.910958  188656 ssh_runner.go:195] Run: grep 192.168.61.51	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:30.915645  188656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
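Expanded for readability, the /bin/bash -c one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP; these are the same commands as in the log line, only reformatted:

    # Drop any stale control-plane entry, append the current mapping, then install the new file
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;
      echo "192.168.61.51	control-plane.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts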
	I0731 20:59:30.928698  188656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:31.055628  188656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:31.076731  188656 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115 for IP: 192.168.61.51
	I0731 20:59:31.076759  188656 certs.go:194] generating shared ca certs ...
	I0731 20:59:31.076789  188656 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.076979  188656 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:31.077041  188656 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:31.077057  188656 certs.go:256] generating profile certs ...
	I0731 20:59:31.077175  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/client.key
	I0731 20:59:31.077378  188656 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key.072d7f83
	I0731 20:59:31.077514  188656 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key
	I0731 20:59:31.077704  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:31.077789  188656 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:31.077806  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:31.077854  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:31.077892  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:31.077932  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:31.077997  188656 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:31.078906  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:31.126980  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:31.167327  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:31.211947  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:31.258307  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 20:59:31.296628  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 20:59:31.342330  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:31.391114  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/old-k8s-version-239115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 20:59:31.415097  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:31.442595  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:31.472160  188656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:31.497814  188656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:31.515890  188656 ssh_runner.go:195] Run: openssl version
	I0731 20:59:31.523423  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:31.537984  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544161  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.544225  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:31.552590  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:31.567190  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:31.581206  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586903  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.586966  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:31.593485  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:31.606764  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:31.619748  188656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624599  188656 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.624681  188656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:31.631293  188656 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:31.642823  188656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:31.647273  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:31.653142  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:31.659046  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:31.665552  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:31.671454  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:31.677426  188656 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
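The openssl invocations above are two standard checks: -hash -noout prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlink (b5213941 for minikubeCA.pem here), and -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A condensed sketch of the same checks:

    # Subject-hash symlink convention (hash value taken from the log above)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    # Expiry check: non-zero exit status if the cert expires within 86400s (24h)
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400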
	I0731 20:59:31.683490  188656 kubeadm.go:392] StartCluster: {Name:old-k8s-version-239115 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-239115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.51 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:31.683586  188656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:31.683625  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.725466  188656 cri.go:89] found id: ""
	I0731 20:59:31.725548  188656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:31.737025  188656 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:31.737050  188656 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:31.737113  188656 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:31.747325  188656 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:31.748325  188656 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-239115" does not appear in /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:59:31.748965  188656 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-121704/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-239115" cluster setting kubeconfig missing "old-k8s-version-239115" context setting]
	I0731 20:59:31.749997  188656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:31.757569  188656 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:31.771188  188656 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.51
	I0731 20:59:31.771222  188656 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:31.771236  188656 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:31.771292  188656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:31.811574  188656 cri.go:89] found id: ""
	I0731 20:59:31.811653  188656 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:31.829930  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:31.840145  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:31.840165  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:31.840206  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:31.851266  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:31.851340  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:31.861634  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:31.871532  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:31.871605  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:31.882164  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.892222  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:31.892291  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:31.903299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:31.916163  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:31.916235  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
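The four grep/rm pairs above follow one pattern: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 (here none of the files exist at all), it is removed so kubeadm can regenerate it. A condensed sketch of that pattern, not minikube's actual Go code:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf    # missing or stale: delete and let kubeadm recreate it
    done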
	I0731 20:59:31.929423  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:31.942668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.107220  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:32.953249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.207806  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:33.307640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
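The restart path then re-runs individual kubeadm init phases against the freshly copied config rather than a full kubeadm init; stripped of the sudo env PATH prefix, the sequence above is:

    kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml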
	I0731 20:59:33.410338  188656 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:33.410444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:31.221009  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:33.589275  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.612024  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:35.109601  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:32.713632  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:32.714137  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:32.714169  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:32.714064  189758 retry.go:31] will retry after 2.013485748s: waiting for machine to come up
	I0731 20:59:34.729625  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:34.730006  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:34.730070  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:34.729970  189758 retry.go:31] will retry after 2.193072749s: waiting for machine to come up
	I0731 20:59:36.924345  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:36.924990  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:36.925008  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:36.924940  189758 retry.go:31] will retry after 3.394781674s: waiting for machine to come up
	I0731 20:59:33.910958  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.411011  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:34.911110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.410715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:35.911117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.410825  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.911311  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.410757  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:37.910786  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:38.410821  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:36.089622  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:38.589435  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:37.110446  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:39.111323  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:40.322463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:40.322827  187862 main.go:141] libmachine: (embed-certs-831240) DBG | unable to find current IP address of domain embed-certs-831240 in network mk-embed-certs-831240
	I0731 20:59:40.322857  187862 main.go:141] libmachine: (embed-certs-831240) DBG | I0731 20:59:40.322774  189758 retry.go:31] will retry after 3.836613891s: waiting for machine to come up
	I0731 20:59:38.910891  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.411547  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:39.911260  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.411404  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:40.910719  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.411449  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:41.910643  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.410967  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:42.910703  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:43.411187  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
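The repeated pgrep lines above are process 188656 polling roughly every 500ms for a kube-apiserver process to appear; a rough shell equivalent of that wait loop (illustrative only, not minikube's actual implementation):

    # -x: exact match, -n: newest matching process, -f: match against the full command line
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done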
	I0731 20:59:41.088768  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:43.589256  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:41.609891  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.111379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:44.160516  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161009  187862 main.go:141] libmachine: (embed-certs-831240) Found IP for machine: 192.168.39.92
	I0731 20:59:44.161029  187862 main.go:141] libmachine: (embed-certs-831240) Reserving static IP address...
	I0731 20:59:44.161041  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has current primary IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.161561  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.161594  187862 main.go:141] libmachine: (embed-certs-831240) DBG | skip adding static IP to network mk-embed-certs-831240 - found existing host DHCP lease matching {name: "embed-certs-831240", mac: "52:54:00:ff:69:a6", ip: "192.168.39.92"}
	I0731 20:59:44.161609  187862 main.go:141] libmachine: (embed-certs-831240) Reserved static IP address: 192.168.39.92
	I0731 20:59:44.161623  187862 main.go:141] libmachine: (embed-certs-831240) Waiting for SSH to be available...
	I0731 20:59:44.161638  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Getting to WaitForSSH function...
	I0731 20:59:44.163936  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164285  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.164318  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.164447  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH client type: external
	I0731 20:59:44.164479  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa (-rw-------)
	I0731 20:59:44.164499  187862 main.go:141] libmachine: (embed-certs-831240) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 20:59:44.164510  187862 main.go:141] libmachine: (embed-certs-831240) DBG | About to run SSH command:
	I0731 20:59:44.164544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | exit 0
	I0731 20:59:44.293463  187862 main.go:141] libmachine: (embed-certs-831240) DBG | SSH cmd err, output: <nil>: 
	I0731 20:59:44.293819  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetConfigRaw
	I0731 20:59:44.294490  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.296982  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297351  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.297381  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.297634  187862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/config.json ...
	I0731 20:59:44.297877  187862 machine.go:94] provisionDockerMachine start ...
	I0731 20:59:44.297897  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:44.298116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.300452  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300806  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.300829  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.300953  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.301146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301308  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.301439  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.301634  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.301811  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.301823  187862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 20:59:44.418065  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 20:59:44.418105  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418428  187862 buildroot.go:166] provisioning hostname "embed-certs-831240"
	I0731 20:59:44.418446  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.418666  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.421984  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422403  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.422434  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.422568  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.422733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.422893  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.423023  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.423208  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.423371  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.423410  187862 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-831240 && echo "embed-certs-831240" | sudo tee /etc/hostname
	I0731 20:59:44.549670  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-831240
	
	I0731 20:59:44.549697  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.552503  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.552851  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.552876  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.553017  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.553200  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553398  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.553533  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.553721  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.554012  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.554039  187862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-831240' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-831240/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-831240' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 20:59:44.674662  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 20:59:44.674693  187862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-121704/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-121704/.minikube}
	I0731 20:59:44.674713  187862 buildroot.go:174] setting up certificates
	I0731 20:59:44.674723  187862 provision.go:84] configureAuth start
	I0731 20:59:44.674733  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetMachineName
	I0731 20:59:44.675011  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:44.677631  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.677911  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.677951  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.678081  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.679869  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680177  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.680205  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.680332  187862 provision.go:143] copyHostCerts
	I0731 20:59:44.680391  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem, removing ...
	I0731 20:59:44.680401  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem
	I0731 20:59:44.680450  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/ca.pem (1082 bytes)
	I0731 20:59:44.680537  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem, removing ...
	I0731 20:59:44.680545  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem
	I0731 20:59:44.680564  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/cert.pem (1123 bytes)
	I0731 20:59:44.680628  187862 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem, removing ...
	I0731 20:59:44.680635  187862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem
	I0731 20:59:44.680652  187862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-121704/.minikube/key.pem (1675 bytes)
	I0731 20:59:44.680711  187862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem org=jenkins.embed-certs-831240 san=[127.0.0.1 192.168.39.92 embed-certs-831240 localhost minikube]
	I0731 20:59:44.733872  187862 provision.go:177] copyRemoteCerts
	I0731 20:59:44.733927  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 20:59:44.733951  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.736399  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736731  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.736758  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.736935  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.737131  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.737273  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.737430  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:44.824050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 20:59:44.847699  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 20:59:44.872138  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 20:59:44.896013  187862 provision.go:87] duration metric: took 221.275458ms to configureAuth
	I0731 20:59:44.896042  187862 buildroot.go:189] setting minikube options for container-runtime
	I0731 20:59:44.896234  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:59:44.896327  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:44.898820  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899206  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:44.899232  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:44.899457  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:44.899660  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899822  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:44.899993  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:44.900216  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:44.900438  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:44.900462  187862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 20:59:45.179165  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 20:59:45.179194  187862 machine.go:97] duration metric: took 881.302407ms to provisionDockerMachine
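The tee/systemctl command a few lines above leaves a small drop-in on the node before restarting crio; its content, as echoed back in the SSH output, is:

    # /etc/sysconfig/crio.minikube (content reproduced from the SSH output above)
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '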
	I0731 20:59:45.179213  187862 start.go:293] postStartSetup for "embed-certs-831240" (driver="kvm2")
	I0731 20:59:45.179226  187862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 20:59:45.179252  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.179615  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 20:59:45.179646  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.182617  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183047  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.183069  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.183284  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.183510  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.183654  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.183805  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.273492  187862 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 20:59:45.277593  187862 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 20:59:45.277618  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/addons for local assets ...
	I0731 20:59:45.277687  187862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-121704/.minikube/files for local assets ...
	I0731 20:59:45.277782  187862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem -> 1288912.pem in /etc/ssl/certs
	I0731 20:59:45.277889  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 20:59:45.288172  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:45.311763  187862 start.go:296] duration metric: took 132.534326ms for postStartSetup
	I0731 20:59:45.311803  187862 fix.go:56] duration metric: took 22.228928797s for fixHost
	I0731 20:59:45.311827  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.314578  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.314962  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.314998  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.315146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.315381  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315549  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.315681  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.315868  187862 main.go:141] libmachine: Using SSH client type: native
	I0731 20:59:45.316035  187862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.92 22 <nil> <nil>}
	I0731 20:59:45.316045  187862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 20:59:45.426289  187862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722459585.381297707
	
	I0731 20:59:45.426314  187862 fix.go:216] guest clock: 1722459585.381297707
	I0731 20:59:45.426324  187862 fix.go:229] Guest: 2024-07-31 20:59:45.381297707 +0000 UTC Remote: 2024-07-31 20:59:45.311808006 +0000 UTC m=+363.090091892 (delta=69.489701ms)
	I0731 20:59:45.426379  187862 fix.go:200] guest clock delta is within tolerance: 69.489701ms
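	The two fix.go lines above parse the guest's `date +%s.%N` output, compare it with the host's view of the time, and accept the skew because it is under tolerance (69.489701ms here). A minimal sketch of that comparison, assuming a 1s tolerance since the actual threshold is not shown in this log:
	
	// Illustrative only; not minikube's fix.go. The tolerance value is assumed.
	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)
	
	// guestClockDelta parses "seconds.nanoseconds" (the `date +%s.%N` output)
	// and returns guest-time minus host-time.
	func guestClockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(hostNow), nil
	}
	
	func main() {
		// Values taken from the log lines above.
		delta, err := guestClockDelta("1722459585.381297707", time.Unix(1722459585, 311808006))
		if err != nil {
			panic(err)
		}
		const tolerance = 1 * time.Second // assumption for this sketch
		within := math.Abs(float64(delta)) < float64(tolerance)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
	}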
	I0731 20:59:45.426387  187862 start.go:83] releasing machines lock for "embed-certs-831240", held for 22.343543995s
	I0731 20:59:45.426419  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.426684  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:45.429330  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429757  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.429785  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.429952  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430453  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430671  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 20:59:45.430790  187862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 20:59:45.430854  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.430905  187862 ssh_runner.go:195] Run: cat /version.json
	I0731 20:59:45.430943  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 20:59:45.433850  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434108  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434192  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434222  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434385  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434580  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434584  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:45.434611  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:45.434760  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.434768  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 20:59:45.434939  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 20:59:45.434929  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.435099  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 20:59:45.435243  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 20:59:45.542122  187862 ssh_runner.go:195] Run: systemctl --version
	I0731 20:59:45.548583  187862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 20:59:45.690235  187862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 20:59:45.696897  187862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 20:59:45.696986  187862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 20:59:45.714456  187862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 20:59:45.714480  187862 start.go:495] detecting cgroup driver to use...
	I0731 20:59:45.714546  187862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 20:59:45.732184  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 20:59:45.747047  187862 docker.go:217] disabling cri-docker service (if available) ...
	I0731 20:59:45.747104  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 20:59:45.761152  187862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 20:59:45.775267  187862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 20:59:45.890891  187862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 20:59:46.043503  187862 docker.go:233] disabling docker service ...
	I0731 20:59:46.043577  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 20:59:46.058174  187862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 20:59:46.070900  187862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 20:59:46.209527  187862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 20:59:46.343868  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 20:59:46.357583  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 20:59:46.375819  187862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 20:59:46.375875  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.386762  187862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 20:59:46.386844  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.397495  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.407654  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.418326  187862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 20:59:46.428983  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.439530  187862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.457956  187862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 20:59:46.468003  187862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 20:59:46.477332  187862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 20:59:46.477400  187862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 20:59:46.490886  187862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 20:59:46.500516  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:46.617952  187862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 20:59:46.761978  187862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 20:59:46.762088  187862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 20:59:46.767210  187862 start.go:563] Will wait 60s for crictl version
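	After `systemctl restart crio`, start.go above waits up to 60s for the CRI-O socket path and then for a usable crictl version. A minimal polling sketch of the wait-for-socket step, with an assumed 500ms poll interval (the real implementation's interval is not shown here):
	
	// Illustrative only; not minikube's start.go.
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForPath polls until the given path exists or the timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("%s did not appear within %s", path, timeout)
	}
	
	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}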
	I0731 20:59:46.767275  187862 ssh_runner.go:195] Run: which crictl
	I0731 20:59:46.771502  187862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 20:59:46.810894  187862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 20:59:46.810976  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.839234  187862 ssh_runner.go:195] Run: crio --version
	I0731 20:59:46.871209  187862 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 20:59:46.872648  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetIP
	I0731 20:59:46.875374  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875683  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 20:59:46.875698  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 20:59:46.875900  187862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 20:59:46.880402  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 20:59:46.894098  187862 kubeadm.go:883] updating cluster {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 20:59:46.894238  187862 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 20:59:46.894300  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:46.937003  187862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 20:59:46.937079  187862 ssh_runner.go:195] Run: which lz4
	I0731 20:59:46.941158  187862 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 20:59:46.945395  187862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 20:59:46.945425  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 20:59:43.910997  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.410783  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:44.911365  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.410690  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.911150  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.411384  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:46.910579  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.411171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:47.910578  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:48.411377  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:45.589690  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:47.591464  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:46.608955  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.611634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:50.615557  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:48.414703  187862 crio.go:462] duration metric: took 1.473569222s to copy over tarball
	I0731 20:59:48.414789  187862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 20:59:50.666750  187862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.251926888s)
	I0731 20:59:50.666783  187862 crio.go:469] duration metric: took 2.252043688s to extract the tarball
	I0731 20:59:50.666793  187862 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 20:59:50.707188  187862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 20:59:50.749781  187862 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 20:59:50.749808  187862 cache_images.go:84] Images are preloaded, skipping loading
	I0731 20:59:50.749817  187862 kubeadm.go:934] updating node { 192.168.39.92 8443 v1.30.3 crio true true} ...
	I0731 20:59:50.749923  187862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-831240 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 20:59:50.749998  187862 ssh_runner.go:195] Run: crio config
	I0731 20:59:50.797191  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:50.797214  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:50.797227  187862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 20:59:50.797253  187862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.92 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-831240 NodeName:embed-certs-831240 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 20:59:50.797484  187862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-831240"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 20:59:50.797556  187862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 20:59:50.808170  187862 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 20:59:50.808236  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 20:59:50.817847  187862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0731 20:59:50.834107  187862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 20:59:50.849722  187862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0731 20:59:50.866599  187862 ssh_runner.go:195] Run: grep 192.168.39.92	control-plane.minikube.internal$ /etc/hosts
	I0731 20:59:50.870727  187862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
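	The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the node IP, staging the result in a temp file before copying it back with sudo. A rough Go equivalent of that rewrite, illustrative only (the real step is the shell command shown above, and copying over /etc/hosts would still need root):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// rewriteHosts drops lines ending in "\t<name>" (like the `grep -v` above),
	// appends "<ip>\t<name>", and stages the result in a temporary file.
	func rewriteHosts(hostsPath, ip, name string) (string, error) {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return "", err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		tmp, err := os.CreateTemp("", "hosts")
		if err != nil {
			return "", err
		}
		defer tmp.Close()
		if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
			return "", err
		}
		return tmp.Name(), nil // caller would `sudo cp` this over /etc/hosts
	}
	
	func main() {
		tmp, err := rewriteHosts("/etc/hosts", "192.168.39.92", "control-plane.minikube.internal")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("staged hosts file at", tmp)
	}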
	I0731 20:59:50.884490  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 20:59:51.043488  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 20:59:51.064792  187862 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240 for IP: 192.168.39.92
	I0731 20:59:51.064816  187862 certs.go:194] generating shared ca certs ...
	I0731 20:59:51.064836  187862 certs.go:226] acquiring lock for ca certs: {Name:mk321e55c51459eef33684f5f907e6de3f519b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 20:59:51.065142  187862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key
	I0731 20:59:51.065225  187862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key
	I0731 20:59:51.065254  187862 certs.go:256] generating profile certs ...
	I0731 20:59:51.065443  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/client.key
	I0731 20:59:51.065571  187862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key.4e545c52
	I0731 20:59:51.065639  187862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key
	I0731 20:59:51.065798  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem (1338 bytes)
	W0731 20:59:51.065846  187862 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891_empty.pem, impossibly tiny 0 bytes
	I0731 20:59:51.065857  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 20:59:51.065883  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/ca.pem (1082 bytes)
	I0731 20:59:51.065909  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/cert.pem (1123 bytes)
	I0731 20:59:51.065929  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/certs/key.pem (1675 bytes)
	I0731 20:59:51.065971  187862 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem (1708 bytes)
	I0731 20:59:51.066633  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 20:59:51.107287  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 20:59:51.138745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 20:59:51.176139  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 20:59:51.211344  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 20:59:51.241050  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 20:59:51.269307  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 20:59:51.293184  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/embed-certs-831240/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 20:59:51.316745  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 20:59:51.343620  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/certs/128891.pem --> /usr/share/ca-certificates/128891.pem (1338 bytes)
	I0731 20:59:51.367293  187862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/ssl/certs/1288912.pem --> /usr/share/ca-certificates/1288912.pem (1708 bytes)
	I0731 20:59:51.391789  187862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 20:59:51.413821  187862 ssh_runner.go:195] Run: openssl version
	I0731 20:59:51.420455  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128891.pem && ln -fs /usr/share/ca-certificates/128891.pem /etc/ssl/certs/128891.pem"
	I0731 20:59:51.431721  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436672  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:42 /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.436724  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128891.pem
	I0731 20:59:51.442604  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/128891.pem /etc/ssl/certs/51391683.0"
	I0731 20:59:51.453601  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1288912.pem && ln -fs /usr/share/ca-certificates/1288912.pem /etc/ssl/certs/1288912.pem"
	I0731 20:59:51.464109  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468598  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:42 /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.468648  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1288912.pem
	I0731 20:59:51.474333  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1288912.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 20:59:51.484758  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 20:59:51.495093  187862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499557  187862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:29 /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.499605  187862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 20:59:51.505244  187862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 20:59:51.515545  187862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 20:59:51.519923  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 20:59:51.525696  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 20:59:51.531430  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 20:59:51.537082  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 20:59:51.542713  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 20:59:51.548206  187862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 20:59:51.553705  187862 kubeadm.go:392] StartCluster: {Name:embed-certs-831240 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-831240 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 20:59:51.553793  187862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 20:59:51.553841  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.592396  187862 cri.go:89] found id: ""
	I0731 20:59:51.592472  187862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 20:59:51.602510  187862 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 20:59:51.602528  187862 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 20:59:51.602578  187862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 20:59:51.612384  187862 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:59:51.613530  187862 kubeconfig.go:125] found "embed-certs-831240" server: "https://192.168.39.92:8443"
	I0731 20:59:51.615991  187862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 20:59:51.625205  187862 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.92
	I0731 20:59:51.625239  187862 kubeadm.go:1160] stopping kube-system containers ...
	I0731 20:59:51.625253  187862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 20:59:51.625307  187862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 20:59:51.663278  187862 cri.go:89] found id: ""
	I0731 20:59:51.663370  187862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 20:59:51.678876  187862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 20:59:51.688071  187862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 20:59:51.688092  187862 kubeadm.go:157] found existing configuration files:
	
	I0731 20:59:51.688139  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 20:59:51.696441  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 20:59:51.696494  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 20:59:51.705310  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 20:59:51.713545  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 20:59:51.713599  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 20:59:51.723512  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.732304  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 20:59:51.732380  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 20:59:51.741301  187862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 20:59:51.749537  187862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 20:59:51.749583  187862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 20:59:51.758609  187862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 20:59:51.774450  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:51.888916  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:48.910784  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.411137  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:49.911453  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.411128  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.911431  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.410483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:51.910975  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.411519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:52.911079  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.410802  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:50.094603  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.589951  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:53.424691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:55.609675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:52.666705  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.899759  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:52.975806  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:53.050422  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 20:59:53.050493  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:53.551073  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.051427  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.551268  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.570361  187862 api_server.go:72] duration metric: took 1.519937245s to wait for apiserver process to appear ...
	I0731 20:59:54.570389  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 20:59:54.570414  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:53.911405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.410870  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:54.911330  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.411491  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:55.911380  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.411483  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:56.910602  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.411228  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.910486  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:58.411198  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:57.260421  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.260455  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.260469  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.284265  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 20:59:57.284301  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 20:59:57.570976  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:57.575616  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:57.575644  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.071247  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.075871  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.075903  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:58.570906  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:58.581990  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 20:59:58.582038  187862 api_server.go:103] status: https://192.168.39.92:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 20:59:59.070528  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 20:59:59.074787  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 20:59:59.081502  187862 api_server.go:141] control plane version: v1.30.3
	I0731 20:59:59.081541  187862 api_server.go:131] duration metric: took 4.511132973s to wait for apiserver health ...
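	The api_server.go lines above poll https://192.168.39.92:8443/healthz, tolerating the 403 responses (anonymous user before RBAC bootstrap) and the 500 responses (poststarthook checks still failing) until the endpoint finally returns 200. A minimal sketch of such a poll loop, with an assumed client timeout and poll interval:
	
	// Illustrative only; not minikube's api_server.go.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns 200 OK or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed per-request timeout
			// The apiserver cert is cluster-internal, so verification is
			// skipped here purely for the sake of the illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz check passed
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.39.92:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}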
	I0731 20:59:59.081552  187862 cni.go:84] Creating CNI manager for ""
	I0731 20:59:59.081561  187862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 20:59:59.083504  187862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 20:59:55.089279  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:57.589380  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 20:59:59.084894  187862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 20:59:59.098139  187862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
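	The 1-k8s.conflist written above is the bridge CNI configuration minikube generates for the kvm2 driver with the crio runtime. Its exact contents are not shown in the log; the following is only an illustrative sketch of a typical bridge conflist, not the file from this run:

	    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF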
	I0731 20:59:59.118458  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 20:59:59.128022  187862 system_pods.go:59] 8 kube-system pods found
	I0731 20:59:59.128061  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0731 20:59:59.128071  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 20:59:59.128082  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 20:59:59.128100  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 20:59:59.128113  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 20:59:59.128121  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 20:59:59.128134  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 20:59:59.128145  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 20:59:59.128156  187862 system_pods.go:74] duration metric: took 9.673815ms to wait for pod list to return data ...
	I0731 20:59:59.128168  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 20:59:59.131825  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 20:59:59.131853  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 20:59:59.131865  187862 node_conditions.go:105] duration metric: took 3.691724ms to run NodePressure ...
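	The pod inventory and node-capacity figures above correspond to what kubectl reports for the same cluster; a small sketch, assuming the kubeconfig context matches the profile name:

	    kubectl --context embed-certs-831240 -n kube-system get pods
	    kubectl --context embed-certs-831240 describe node embed-certs-831240 | grep -E 'cpu:|ephemeral-storage:'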
	I0731 20:59:59.131897  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 20:59:59.494923  187862 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501848  187862 kubeadm.go:739] kubelet initialised
	I0731 20:59:59.501875  187862 kubeadm.go:740] duration metric: took 6.920816ms waiting for restarted kubelet to initialise ...
	I0731 20:59:59.501885  187862 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 20:59:59.510503  187862 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.518204  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518234  187862 pod_ready.go:81] duration metric: took 7.702873ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.518247  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.518263  187862 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.523236  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523258  187862 pod_ready.go:81] duration metric: took 4.985299ms for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.523266  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "etcd-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.523275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.535237  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535256  187862 pod_ready.go:81] duration metric: took 11.97449ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.535270  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.535275  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.541512  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541531  187862 pod_ready.go:81] duration metric: took 6.24797ms for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.541539  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.541545  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 20:59:59.922722  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922757  187862 pod_ready.go:81] duration metric: took 381.203526ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	E0731 20:59:59.922771  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-proxy-x662j" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 20:59:59.922779  187862 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.322049  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322077  187862 pod_ready.go:81] duration metric: took 399.289505ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.322088  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.322094  187862 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:00.722961  187862 pod_ready.go:97] node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.722993  187862 pod_ready.go:81] duration metric: took 400.88956ms for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:00:00.723008  187862 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-831240" hosting pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:00.723017  187862 pod_ready.go:38] duration metric: took 1.221112347s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
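	The per-pod readiness polling above is equivalent to waiting on the Ready condition for the same label selectors; a minimal sketch, assuming the embed-certs-831240 context:

	    kubectl --context embed-certs-831240 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	    kubectl --context embed-certs-831240 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m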
	I0731 21:00:00.723050  187862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:00:00.735642  187862 ops.go:34] apiserver oom_adj: -16
	I0731 21:00:00.735697  187862 kubeadm.go:597] duration metric: took 9.133136671s to restartPrimaryControlPlane
	I0731 21:00:00.735735  187862 kubeadm.go:394] duration metric: took 9.182030801s to StartCluster
	I0731 21:00:00.735764  187862 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.735860  187862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:00:00.737955  187862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:00:00.738247  187862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.92 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:00:00.738329  187862 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:00:00.738418  187862 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-831240"
	I0731 21:00:00.738432  187862 addons.go:69] Setting default-storageclass=true in profile "embed-certs-831240"
	I0731 21:00:00.738463  187862 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-831240"
	W0731 21:00:00.738475  187862 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:00:00.738481  187862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-831240"
	I0731 21:00:00.738513  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738547  187862 config.go:182] Loaded profile config "embed-certs-831240": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:00:00.738581  187862 addons.go:69] Setting metrics-server=true in profile "embed-certs-831240"
	I0731 21:00:00.738651  187862 addons.go:234] Setting addon metrics-server=true in "embed-certs-831240"
	W0731 21:00:00.738666  187862 addons.go:243] addon metrics-server should already be in state true
	I0731 21:00:00.738735  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.738818  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738858  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.738897  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.738960  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.739144  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.739190  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.740244  187862 out.go:177] * Verifying Kubernetes components...
	I0731 21:00:00.746003  187862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:00:00.755735  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35867
	I0731 21:00:00.755773  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0731 21:00:00.756268  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756271  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.756594  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0731 21:00:00.756820  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756847  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.756892  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.756917  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757069  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.757228  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757254  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.757458  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.757638  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.757668  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.757745  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.757774  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.758005  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.758543  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.758586  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.761553  187862 addons.go:234] Setting addon default-storageclass=true in "embed-certs-831240"
	W0731 21:00:00.761587  187862 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:00:00.761618  187862 host.go:66] Checking if "embed-certs-831240" exists ...
	I0731 21:00:00.762018  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.762070  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.775492  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0731 21:00:00.776091  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.776712  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.776743  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.776760  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35295
	I0731 21:00:00.777245  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.777402  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.777513  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.777920  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.777945  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.778185  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0731 21:00:00.778393  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.778603  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.778687  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.779223  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.779243  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.779665  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.779718  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.780231  187862 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:00:00.780274  187862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:00:00.780612  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.781947  187862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:00:00.782994  187862 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 20:59:58.110503  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.112109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:00.784194  187862 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:00.784216  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:00:00.784240  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.784937  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:00:00.784958  187862 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:00:00.784984  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.788544  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.788947  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.788970  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789127  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.789389  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.789521  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.789548  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.789571  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.789773  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.790126  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.790324  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.790502  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.790663  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.799024  187862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0731 21:00:00.799718  187862 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:00:00.800341  187862 main.go:141] libmachine: Using API Version  1
	I0731 21:00:00.800360  187862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:00:00.800967  187862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:00:00.801258  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetState
	I0731 21:00:00.803078  187862 main.go:141] libmachine: (embed-certs-831240) Calling .DriverName
	I0731 21:00:00.803555  187862 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:00.803571  187862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:00:00.803591  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHHostname
	I0731 21:00:00.809363  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHPort
	I0731 21:00:00.809461  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809492  187862 main.go:141] libmachine: (embed-certs-831240) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:69:a6", ip: ""} in network mk-embed-certs-831240: {Iface:virbr1 ExpiryTime:2024-07-31 21:59:35 +0000 UTC Type:0 Mac:52:54:00:ff:69:a6 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:embed-certs-831240 Clientid:01:52:54:00:ff:69:a6}
	I0731 21:00:00.809512  187862 main.go:141] libmachine: (embed-certs-831240) DBG | domain embed-certs-831240 has defined IP address 192.168.39.92 and MAC address 52:54:00:ff:69:a6 in network mk-embed-certs-831240
	I0731 21:00:00.809680  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHKeyPath
	I0731 21:00:00.809858  187862 main.go:141] libmachine: (embed-certs-831240) Calling .GetSSHUsername
	I0731 21:00:00.810032  187862 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/embed-certs-831240/id_rsa Username:docker}
	I0731 21:00:00.933963  187862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:00:00.953572  187862 node_ready.go:35] waiting up to 6m0s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:01.036486  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:00:01.040636  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:00:01.040658  187862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:00:01.063384  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:00:01.068645  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:00:01.068675  187862 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:00:01.090838  187862 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:01.090861  187862 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:00:01.113173  187862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:00:02.099966  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063427097s)
	I0731 21:00:02.100021  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100035  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100080  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.036657274s)
	I0731 21:00:02.100129  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100146  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100338  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100441  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100452  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100461  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100580  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100605  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100615  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.100623  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.100698  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100709  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100723  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.100866  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.100875  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.100882  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.107654  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.107688  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.107952  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.107968  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.108003  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140031  187862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026799248s)
	I0731 21:00:02.140100  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140116  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140424  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140455  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140470  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140482  187862 main.go:141] libmachine: Making call to close driver server
	I0731 21:00:02.140494  187862 main.go:141] libmachine: (embed-certs-831240) Calling .Close
	I0731 21:00:02.140772  187862 main.go:141] libmachine: (embed-certs-831240) DBG | Closing plugin on server side
	I0731 21:00:02.140800  187862 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:00:02.140808  187862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:00:02.140817  187862 addons.go:475] Verifying addon metrics-server=true in "embed-certs-831240"
	I0731 21:00:02.142583  187862 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:00:02.143787  187862 addons.go:510] duration metric: took 1.405477731s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
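	The same three addons can also be enabled on a running profile from the minikube CLI, for example:

	    minikube -p embed-certs-831240 addons enable storage-provisioner
	    minikube -p embed-certs-831240 addons enable default-storageclass
	    minikube -p embed-certs-831240 addons enable metrics-server
	    minikube -p embed-certs-831240 addons list   # verify the three addons report as enabled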
	I0731 20:59:58.910774  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.410697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:59:59.911233  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.411170  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.911416  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.410979  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:01.911444  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.411537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:02.911216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:03.411386  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:00.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.588315  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.610109  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:04.610324  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:02.958162  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:05.458997  187862 node_ready.go:53] node "embed-certs-831240" has status "Ready":"False"
	I0731 21:00:03.910942  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.411505  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.911485  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.410763  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:05.910937  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.411216  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:06.910743  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.410941  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:07.910922  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:08.410593  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:04.589597  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.089475  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.090023  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:06.610390  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:09.110758  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:07.958154  187862 node_ready.go:49] node "embed-certs-831240" has status "Ready":"True"
	I0731 21:00:07.958180  187862 node_ready.go:38] duration metric: took 7.004576791s for node "embed-certs-831240" to be "Ready" ...
	I0731 21:00:07.958191  187862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:00:07.969639  187862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974704  187862 pod_ready.go:92] pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:07.974733  187862 pod_ready.go:81] duration metric: took 5.064645ms for pod "coredns-7db6d8ff4d-2ks55" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:07.974745  187862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:09.980566  187862 pod_ready.go:102] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:10.480476  187862 pod_ready.go:92] pod "etcd-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.480501  187862 pod_ready.go:81] duration metric: took 2.505748029s for pod "etcd-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.480511  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485850  187862 pod_ready.go:92] pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:10.485873  187862 pod_ready.go:81] duration metric: took 5.353478ms for pod "kube-apiserver-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:10.485883  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:08.910788  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.410807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:09.911286  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.411372  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:10.910748  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.411253  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.411208  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:12.910887  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:13.411318  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:11.589454  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.090483  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:11.610842  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:14.110306  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:12.492346  187862 pod_ready.go:102] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.991859  187862 pod_ready.go:92] pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.991884  187862 pod_ready.go:81] duration metric: took 3.505993775s for pod "kube-controller-manager-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.991893  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997932  187862 pod_ready.go:92] pod "kube-proxy-x662j" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:13.997961  187862 pod_ready.go:81] duration metric: took 6.060225ms for pod "kube-proxy-x662j" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:13.997974  187862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007155  187862 pod_ready.go:92] pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace has status "Ready":"True"
	I0731 21:00:14.007178  187862 pod_ready.go:81] duration metric: took 9.197289ms for pod "kube-scheduler-embed-certs-831240" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:14.007187  187862 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	I0731 21:00:16.013417  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:13.910943  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.410728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:14.911343  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.410545  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:15.910560  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.411117  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.910537  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.410761  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:17.910796  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:18.411138  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:16.589010  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.589215  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:16.609886  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.610209  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.611613  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.013504  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:20.513116  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:18.911394  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.411098  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:19.910629  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.410698  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:20.910760  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.410503  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.910582  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.410724  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:22.910792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:23.410961  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:21.089938  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.588082  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.109996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:25.110361  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:22.514254  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:24.514729  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.013263  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:23.910510  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.410725  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:24.910807  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.411543  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.911473  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.410494  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:26.910519  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.410950  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:27.911528  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:28.411350  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:25.589873  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.590134  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:27.612311  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:30.110116  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:29.014386  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:31.014534  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:28.911371  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.411269  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:29.911465  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.410633  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:30.911166  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.411184  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:31.910806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.410806  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:32.911125  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:33.410942  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:33.411021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:33.461204  188656 cri.go:89] found id: ""
	I0731 21:00:33.461232  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.461241  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:33.461249  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:33.461313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:33.500898  188656 cri.go:89] found id: ""
	I0731 21:00:33.500927  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.500937  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:33.500944  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:33.501010  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:33.536865  188656 cri.go:89] found id: ""
	I0731 21:00:33.536889  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.536902  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:33.536908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:33.536957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:33.578540  188656 cri.go:89] found id: ""
	I0731 21:00:33.578570  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.578582  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:33.578595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:33.578686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:33.616242  188656 cri.go:89] found id: ""
	I0731 21:00:33.616266  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.616276  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:33.616283  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:33.616345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:33.650436  188656 cri.go:89] found id: ""
	I0731 21:00:33.650468  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.650479  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:33.650487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:33.650552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:33.687256  188656 cri.go:89] found id: ""
	I0731 21:00:33.687288  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.687300  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:33.687308  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:33.687365  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:33.720381  188656 cri.go:89] found id: ""
	I0731 21:00:33.720428  188656 logs.go:276] 0 containers: []
	W0731 21:00:33.720440  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:33.720453  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:33.720469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:33.772182  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:33.772226  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:33.787323  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:33.787359  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:00:30.089778  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.587877  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:32.110769  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:34.610418  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:33.514142  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.013676  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:00:33.907858  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:33.907878  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:33.907892  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:33.974118  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:33.974157  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
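	The diagnostics gathered above (kubelet and CRI-O journals, describe nodes, container status) can be reproduced by hand over SSH into the node; a minimal sketch with a placeholder profile name, since the log does not identify which profile pid 188656 belongs to:

	    PROFILE="<profile-name>"   # placeholder; substitute the failing profile
	    minikube -p "$PROFILE" ssh -- 'sudo journalctl -u kubelet -n 400'
	    minikube -p "$PROFILE" ssh -- 'sudo journalctl -u crio -n 400'
	    minikube -p "$PROFILE" ssh -- 'sudo crictl ps -a'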
	I0731 21:00:36.513427  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:36.527531  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:36.527588  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:36.567679  188656 cri.go:89] found id: ""
	I0731 21:00:36.567706  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.567714  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:36.567726  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:36.567786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:36.608106  188656 cri.go:89] found id: ""
	I0731 21:00:36.608134  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.608145  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:36.608153  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:36.608215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:36.651783  188656 cri.go:89] found id: ""
	I0731 21:00:36.651815  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.651824  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:36.651830  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:36.651892  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:36.686716  188656 cri.go:89] found id: ""
	I0731 21:00:36.686743  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.686751  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:36.686758  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:36.686823  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:36.721823  188656 cri.go:89] found id: ""
	I0731 21:00:36.721857  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.721865  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:36.721871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:36.721939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:36.758060  188656 cri.go:89] found id: ""
	I0731 21:00:36.758093  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.758103  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:36.758112  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:36.758173  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:36.801667  188656 cri.go:89] found id: ""
	I0731 21:00:36.801694  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.801704  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:36.801712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:36.801776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:36.845084  188656 cri.go:89] found id: ""
	I0731 21:00:36.845113  188656 logs.go:276] 0 containers: []
	W0731 21:00:36.845124  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:36.845137  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:36.845152  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:36.897208  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:36.897248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:36.910716  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:36.910750  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:36.987259  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:36.987285  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:36.987304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:37.061109  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:37.061144  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:34.589416  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.592841  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.088346  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:36.611386  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.111149  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:38.516701  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.017409  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:39.600847  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:39.615897  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:39.615957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:39.655390  188656 cri.go:89] found id: ""
	I0731 21:00:39.655417  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.655424  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:39.655430  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:39.655502  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:39.694180  188656 cri.go:89] found id: ""
	I0731 21:00:39.694213  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.694224  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:39.694231  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:39.694300  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:39.736752  188656 cri.go:89] found id: ""
	I0731 21:00:39.736783  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.736793  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:39.736801  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:39.736860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:39.775685  188656 cri.go:89] found id: ""
	I0731 21:00:39.775770  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.775790  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:39.775802  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:39.775871  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:39.816790  188656 cri.go:89] found id: ""
	I0731 21:00:39.816820  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.816829  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:39.816835  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:39.816886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:39.854931  188656 cri.go:89] found id: ""
	I0731 21:00:39.854963  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.854973  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:39.854981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:39.855045  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:39.891039  188656 cri.go:89] found id: ""
	I0731 21:00:39.891066  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.891074  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:39.891083  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:39.891136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:39.927434  188656 cri.go:89] found id: ""
	I0731 21:00:39.927463  188656 logs.go:276] 0 containers: []
	W0731 21:00:39.927473  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:39.927483  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:39.927496  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:39.941240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:39.941272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:40.017212  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:40.017233  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:40.017246  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:40.094047  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:40.094081  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:40.138940  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:40.138966  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:42.690818  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:42.704855  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:42.704931  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:42.752315  188656 cri.go:89] found id: ""
	I0731 21:00:42.752347  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.752368  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:42.752376  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:42.752445  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:42.790060  188656 cri.go:89] found id: ""
	I0731 21:00:42.790090  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.790101  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:42.790109  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:42.790220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:42.825504  188656 cri.go:89] found id: ""
	I0731 21:00:42.825532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.825540  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:42.825547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:42.825598  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:42.860157  188656 cri.go:89] found id: ""
	I0731 21:00:42.860193  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.860204  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:42.860213  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:42.860286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:42.902914  188656 cri.go:89] found id: ""
	I0731 21:00:42.902947  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.902959  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:42.902967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:42.903036  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:42.950503  188656 cri.go:89] found id: ""
	I0731 21:00:42.950532  188656 logs.go:276] 0 containers: []
	W0731 21:00:42.950541  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:42.950550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:42.950603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:43.010232  188656 cri.go:89] found id: ""
	I0731 21:00:43.010261  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.010272  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:43.010280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:43.010344  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:43.045487  188656 cri.go:89] found id: ""
	I0731 21:00:43.045517  188656 logs.go:276] 0 containers: []
	W0731 21:00:43.045527  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:43.045539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:43.045556  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:43.123248  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:43.123279  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:43.123296  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:43.212230  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:43.212272  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:43.254595  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:43.254626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:43.306187  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:43.306227  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:41.589806  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.088126  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:41.611786  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:44.109436  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:43.513500  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.514161  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:45.820246  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:45.835707  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:45.835786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:45.872079  188656 cri.go:89] found id: ""
	I0731 21:00:45.872110  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.872122  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:45.872130  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:45.872196  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:45.910637  188656 cri.go:89] found id: ""
	I0731 21:00:45.910664  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.910672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:45.910678  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:45.910740  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:45.945316  188656 cri.go:89] found id: ""
	I0731 21:00:45.945360  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.945372  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:45.945380  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:45.945455  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:45.982015  188656 cri.go:89] found id: ""
	I0731 21:00:45.982046  188656 logs.go:276] 0 containers: []
	W0731 21:00:45.982057  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:45.982096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:45.982165  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:46.017359  188656 cri.go:89] found id: ""
	I0731 21:00:46.017392  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.017404  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:46.017412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:46.017478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:46.054401  188656 cri.go:89] found id: ""
	I0731 21:00:46.054431  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.054447  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:46.054454  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:46.054507  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:46.092107  188656 cri.go:89] found id: ""
	I0731 21:00:46.092130  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.092137  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:46.092143  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:46.092190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:46.128613  188656 cri.go:89] found id: ""
	I0731 21:00:46.128642  188656 logs.go:276] 0 containers: []
	W0731 21:00:46.128652  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:46.128665  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:46.128679  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:46.144539  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:46.144570  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:46.219399  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:46.219433  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:46.219448  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:46.304486  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:46.304529  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:46.344087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:46.344121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:46.090543  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.090607  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:46.111072  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.610316  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.611553  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.014287  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:50.513252  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:48.894728  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:48.916610  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:48.916675  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:48.978515  188656 cri.go:89] found id: ""
	I0731 21:00:48.978543  188656 logs.go:276] 0 containers: []
	W0731 21:00:48.978550  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:48.978557  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:48.978615  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:49.026224  188656 cri.go:89] found id: ""
	I0731 21:00:49.026257  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.026268  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:49.026276  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:49.026354  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:49.064967  188656 cri.go:89] found id: ""
	I0731 21:00:49.064994  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.065003  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:49.065010  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:49.065070  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:49.101966  188656 cri.go:89] found id: ""
	I0731 21:00:49.101990  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.101999  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:49.102004  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:49.102056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:49.137775  188656 cri.go:89] found id: ""
	I0731 21:00:49.137801  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.137809  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:49.137815  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:49.137867  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:49.173778  188656 cri.go:89] found id: ""
	I0731 21:00:49.173824  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.173832  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:49.173839  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:49.173908  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:49.207211  188656 cri.go:89] found id: ""
	I0731 21:00:49.207239  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.207247  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:49.207254  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:49.207333  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:49.244126  188656 cri.go:89] found id: ""
	I0731 21:00:49.244159  188656 logs.go:276] 0 containers: []
	W0731 21:00:49.244180  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:49.244202  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:49.244221  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:49.299606  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:49.299646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:49.314093  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:49.314121  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:49.384691  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:49.384712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:49.384728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:49.464425  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:49.464462  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.005670  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:52.019617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:52.019705  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:52.053452  188656 cri.go:89] found id: ""
	I0731 21:00:52.053485  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.053494  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:52.053500  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:52.053552  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:52.094462  188656 cri.go:89] found id: ""
	I0731 21:00:52.094495  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.094504  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:52.094510  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:52.094572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:52.134555  188656 cri.go:89] found id: ""
	I0731 21:00:52.134584  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.134595  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:52.134602  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:52.134676  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:52.168805  188656 cri.go:89] found id: ""
	I0731 21:00:52.168851  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.168863  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:52.168871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:52.168939  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:52.203093  188656 cri.go:89] found id: ""
	I0731 21:00:52.203121  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.203132  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:52.203140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:52.203213  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:52.237816  188656 cri.go:89] found id: ""
	I0731 21:00:52.237842  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.237850  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:52.237857  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:52.237906  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:52.272136  188656 cri.go:89] found id: ""
	I0731 21:00:52.272175  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.272194  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:52.272202  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:52.272261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:52.306616  188656 cri.go:89] found id: ""
	I0731 21:00:52.306641  188656 logs.go:276] 0 containers: []
	W0731 21:00:52.306649  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:52.306659  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:52.306671  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:52.372668  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:52.372690  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:52.372707  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:52.457752  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:52.457794  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:52.496087  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:52.496129  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:52.548137  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:52.548176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:50.588204  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.089737  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:53.110034  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.110293  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:52.514848  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.013623  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.015221  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:55.063463  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:55.076922  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:55.077005  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:55.117479  188656 cri.go:89] found id: ""
	I0731 21:00:55.117511  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.117523  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:55.117531  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:55.117595  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:55.156311  188656 cri.go:89] found id: ""
	I0731 21:00:55.156339  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.156348  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:55.156354  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:55.156421  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:55.196778  188656 cri.go:89] found id: ""
	I0731 21:00:55.196807  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.196818  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:55.196826  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:55.196898  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:55.237575  188656 cri.go:89] found id: ""
	I0731 21:00:55.237605  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.237614  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:55.237620  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:55.237672  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:55.271717  188656 cri.go:89] found id: ""
	I0731 21:00:55.271746  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.271754  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:55.271760  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:55.271811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:55.307586  188656 cri.go:89] found id: ""
	I0731 21:00:55.307618  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.307630  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:55.307637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:55.307708  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:55.343325  188656 cri.go:89] found id: ""
	I0731 21:00:55.343352  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.343361  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:55.343367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:55.343418  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:55.378959  188656 cri.go:89] found id: ""
	I0731 21:00:55.378988  188656 logs.go:276] 0 containers: []
	W0731 21:00:55.378997  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:55.379008  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:55.379021  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:55.454213  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:55.454243  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:55.454260  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:55.532802  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:55.532839  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.575903  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:55.575940  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:55.635105  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:55.635140  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.149801  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:00:58.162682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:00:58.162743  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:00:58.196220  188656 cri.go:89] found id: ""
	I0731 21:00:58.196245  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.196254  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:00:58.196260  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:00:58.196313  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:00:58.231052  188656 cri.go:89] found id: ""
	I0731 21:00:58.231083  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.231093  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:00:58.231099  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:00:58.231156  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:00:58.265569  188656 cri.go:89] found id: ""
	I0731 21:00:58.265599  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.265612  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:00:58.265633  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:00:58.265695  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:00:58.300750  188656 cri.go:89] found id: ""
	I0731 21:00:58.300779  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.300788  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:00:58.300793  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:00:58.300869  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:00:58.333920  188656 cri.go:89] found id: ""
	I0731 21:00:58.333949  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.333958  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:00:58.333963  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:00:58.334015  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:00:58.368732  188656 cri.go:89] found id: ""
	I0731 21:00:58.368759  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.368771  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:00:58.368787  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:00:58.368855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:00:58.408454  188656 cri.go:89] found id: ""
	I0731 21:00:58.408488  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.408501  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:00:58.408510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:00:58.408575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:00:58.445855  188656 cri.go:89] found id: ""
	I0731 21:00:58.445888  188656 logs.go:276] 0 containers: []
	W0731 21:00:58.445900  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:00:58.445913  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:00:58.445934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:00:58.496144  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:00:58.496177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:00:58.510708  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:00:58.510743  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:00:58.580690  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:00:58.580712  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:00:58.580725  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:00:58.657281  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:00:58.657320  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:00:55.591068  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:58.088264  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:57.610282  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.611376  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:00:59.017831  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.514115  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:01.196374  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:01.209044  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:01.209111  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:01.247313  188656 cri.go:89] found id: ""
	I0731 21:01:01.247343  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.247353  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:01.247360  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:01.247443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:01.282269  188656 cri.go:89] found id: ""
	I0731 21:01:01.282300  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.282308  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:01.282314  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:01.282370  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:01.315598  188656 cri.go:89] found id: ""
	I0731 21:01:01.315628  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.315638  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:01.315644  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:01.315697  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:01.352492  188656 cri.go:89] found id: ""
	I0731 21:01:01.352521  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.352533  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:01.352540  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:01.352605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:01.387858  188656 cri.go:89] found id: ""
	I0731 21:01:01.387885  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.387894  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:01.387900  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:01.387950  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:01.425014  188656 cri.go:89] found id: ""
	I0731 21:01:01.425042  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.425052  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:01.425061  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:01.425129  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:01.463068  188656 cri.go:89] found id: ""
	I0731 21:01:01.463098  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.463107  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:01.463113  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:01.463171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:01.500174  188656 cri.go:89] found id: ""
	I0731 21:01:01.500203  188656 logs.go:276] 0 containers: []
	W0731 21:01:01.500214  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:01.500229  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:01.500244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:01.554350  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:01.554389  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:01.569353  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:01.569394  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:01.641074  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:01.641095  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:01.641108  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:01.722340  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:01.722377  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:00.088915  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.089981  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:02.109888  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.109951  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.015302  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.513535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:04.264035  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:04.278374  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:04.278441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:04.314037  188656 cri.go:89] found id: ""
	I0731 21:01:04.314068  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.314079  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:04.314087  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:04.314159  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:04.347604  188656 cri.go:89] found id: ""
	I0731 21:01:04.347635  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.347646  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:04.347653  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:04.347718  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:04.382412  188656 cri.go:89] found id: ""
	I0731 21:01:04.382442  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.382454  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:04.382462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:04.382516  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:04.419097  188656 cri.go:89] found id: ""
	I0731 21:01:04.419130  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.419142  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:04.419150  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:04.419209  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:04.464561  188656 cri.go:89] found id: ""
	I0731 21:01:04.464592  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.464601  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:04.464607  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:04.464683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:04.500484  188656 cri.go:89] found id: ""
	I0731 21:01:04.500510  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.500518  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:04.500524  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:04.500577  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:04.536211  188656 cri.go:89] found id: ""
	I0731 21:01:04.536239  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.536250  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:04.536257  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:04.536324  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:04.569521  188656 cri.go:89] found id: ""
	I0731 21:01:04.569548  188656 logs.go:276] 0 containers: []
	W0731 21:01:04.569556  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:04.569567  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:04.569583  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:04.621228  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:04.621261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:04.637500  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:04.637527  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:04.710577  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:04.710606  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:04.710623  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.788305  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:04.788343  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.329209  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:07.343021  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:07.343089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:07.378556  188656 cri.go:89] found id: ""
	I0731 21:01:07.378588  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.378603  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:07.378610  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:07.378679  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:07.416419  188656 cri.go:89] found id: ""
	I0731 21:01:07.416455  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.416467  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:07.416474  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:07.416538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:07.454720  188656 cri.go:89] found id: ""
	I0731 21:01:07.454749  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.454758  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:07.454764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:07.454815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:07.488963  188656 cri.go:89] found id: ""
	I0731 21:01:07.488995  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.489004  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:07.489009  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:07.489060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:07.531916  188656 cri.go:89] found id: ""
	I0731 21:01:07.531949  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.531961  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:07.531967  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:07.532019  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:07.569233  188656 cri.go:89] found id: ""
	I0731 21:01:07.569266  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.569275  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:07.569281  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:07.569350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:07.606318  188656 cri.go:89] found id: ""
	I0731 21:01:07.606349  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.606360  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:07.606368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:07.606442  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:07.641408  188656 cri.go:89] found id: ""
	I0731 21:01:07.641436  188656 logs.go:276] 0 containers: []
	W0731 21:01:07.641445  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:07.641454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:07.641466  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:07.681094  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:07.681123  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:07.734600  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:07.734641  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:07.748747  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:07.748779  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:07.821775  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:07.821799  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:07.821816  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:04.590174  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:07.089655  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:06.110694  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:08.610381  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.611128  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:09.013688  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:11.513361  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:10.399973  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:10.412908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:10.412986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:10.448866  188656 cri.go:89] found id: ""
	I0731 21:01:10.448895  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.448903  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:10.448909  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:10.448966  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:10.486309  188656 cri.go:89] found id: ""
	I0731 21:01:10.486338  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.486346  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:10.486352  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:10.486411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:10.522834  188656 cri.go:89] found id: ""
	I0731 21:01:10.522856  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.522863  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:10.522870  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:10.522929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:10.558272  188656 cri.go:89] found id: ""
	I0731 21:01:10.558304  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.558324  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:10.558330  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:10.558391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:10.596560  188656 cri.go:89] found id: ""
	I0731 21:01:10.596589  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.596600  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:10.596608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:10.596668  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:10.633488  188656 cri.go:89] found id: ""
	I0731 21:01:10.633518  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.633529  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:10.633537  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:10.633597  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:10.665779  188656 cri.go:89] found id: ""
	I0731 21:01:10.665812  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.665824  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:10.665832  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:10.665895  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:10.700526  188656 cri.go:89] found id: ""
	I0731 21:01:10.700556  188656 logs.go:276] 0 containers: []
	W0731 21:01:10.700564  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:10.700575  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:10.700587  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:10.753507  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:10.753550  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:10.768056  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:10.768089  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:10.842120  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:10.842142  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:10.842159  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:10.916532  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:10.916565  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:13.456826  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:13.471064  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:13.471130  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:13.505660  188656 cri.go:89] found id: ""
	I0731 21:01:13.505694  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.505707  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:13.505713  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:13.505775  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:13.543084  188656 cri.go:89] found id: ""
	I0731 21:01:13.543109  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.543117  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:13.543123  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:13.543182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:13.578940  188656 cri.go:89] found id: ""
	I0731 21:01:13.578966  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.578974  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:13.578981  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:13.579047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:13.617710  188656 cri.go:89] found id: ""
	I0731 21:01:13.617733  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.617740  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:13.617747  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:13.617810  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:13.653535  188656 cri.go:89] found id: ""
	I0731 21:01:13.653567  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.653579  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:13.653587  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:13.653658  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:13.687914  188656 cri.go:89] found id: ""
	I0731 21:01:13.687942  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.687953  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:13.687960  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:13.688031  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:13.725242  188656 cri.go:89] found id: ""
	I0731 21:01:13.725278  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.725287  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:13.725293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:13.725372  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:13.760890  188656 cri.go:89] found id: ""
	I0731 21:01:13.760918  188656 logs.go:276] 0 containers: []
	W0731 21:01:13.760929  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:13.760943  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:13.760958  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:13.810212  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:13.810252  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:13.824229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:13.824259  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:09.588945  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:12.088514  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:14.088684  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.109760  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:15.109938  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:13.515603  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:16.013268  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:13.895306  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:13.895331  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:13.895344  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:13.976366  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:13.976411  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.520165  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:16.533970  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:16.534035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:16.571444  188656 cri.go:89] found id: ""
	I0731 21:01:16.571474  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.571482  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:16.571488  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:16.571539  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:16.608150  188656 cri.go:89] found id: ""
	I0731 21:01:16.608176  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.608186  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:16.608194  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:16.608254  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:16.643252  188656 cri.go:89] found id: ""
	I0731 21:01:16.643283  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.643294  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:16.643302  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:16.643363  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:16.679521  188656 cri.go:89] found id: ""
	I0731 21:01:16.679552  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.679563  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:16.679571  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:16.679624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:16.713502  188656 cri.go:89] found id: ""
	I0731 21:01:16.713532  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.713541  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:16.713547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:16.713624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:16.748276  188656 cri.go:89] found id: ""
	I0731 21:01:16.748309  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.748318  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:16.748324  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:16.748383  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:16.783895  188656 cri.go:89] found id: ""
	I0731 21:01:16.783929  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.783940  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:16.783948  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:16.784014  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:16.817362  188656 cri.go:89] found id: ""
	I0731 21:01:16.817392  188656 logs.go:276] 0 containers: []
	W0731 21:01:16.817415  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:16.817425  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:16.817440  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:16.872584  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:16.872637  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:16.887240  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:16.887275  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:16.961920  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:16.961949  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:16.961967  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:17.041889  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:17.041924  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:16.089420  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.089611  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:17.110442  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.111424  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:18.013772  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:20.514737  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:19.585935  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:19.600389  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:19.600475  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:19.635883  188656 cri.go:89] found id: ""
	I0731 21:01:19.635913  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.635924  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:19.635932  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:19.635995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:19.674413  188656 cri.go:89] found id: ""
	I0731 21:01:19.674441  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.674459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:19.674471  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:19.674538  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:19.708181  188656 cri.go:89] found id: ""
	I0731 21:01:19.708211  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.708219  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:19.708224  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:19.708292  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:19.744737  188656 cri.go:89] found id: ""
	I0731 21:01:19.744774  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.744783  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:19.744791  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:19.744849  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:19.784366  188656 cri.go:89] found id: ""
	I0731 21:01:19.784398  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.784406  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:19.784412  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:19.784465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:19.819234  188656 cri.go:89] found id: ""
	I0731 21:01:19.819269  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.819280  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:19.819289  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:19.819355  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:19.851462  188656 cri.go:89] found id: ""
	I0731 21:01:19.851494  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.851503  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:19.851510  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:19.851563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:19.896575  188656 cri.go:89] found id: ""
	I0731 21:01:19.896604  188656 logs.go:276] 0 containers: []
	W0731 21:01:19.896612  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:19.896624  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:19.896640  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:19.952239  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:19.952284  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:19.969411  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:19.969442  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:20.042820  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:20.042847  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:20.042863  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:20.130070  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:20.130115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:22.674956  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:22.688548  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:22.688616  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:22.728750  188656 cri.go:89] found id: ""
	I0731 21:01:22.728775  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.728784  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:22.728790  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:22.728844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:22.763765  188656 cri.go:89] found id: ""
	I0731 21:01:22.763793  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.763801  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:22.763807  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:22.763858  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:22.799134  188656 cri.go:89] found id: ""
	I0731 21:01:22.799163  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.799172  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:22.799178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:22.799237  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:22.833972  188656 cri.go:89] found id: ""
	I0731 21:01:22.833998  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.834005  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:22.834011  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:22.834060  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:22.869686  188656 cri.go:89] found id: ""
	I0731 21:01:22.869711  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.869719  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:22.869724  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:22.869776  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:22.907919  188656 cri.go:89] found id: ""
	I0731 21:01:22.907950  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.907961  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:22.907969  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:22.908035  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:22.947162  188656 cri.go:89] found id: ""
	I0731 21:01:22.947192  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.947204  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:22.947212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:22.947273  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:22.992822  188656 cri.go:89] found id: ""
	I0731 21:01:22.992860  188656 logs.go:276] 0 containers: []
	W0731 21:01:22.992872  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:22.992884  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:22.992900  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:23.045552  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:23.045589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:23.059895  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:23.059925  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:23.135535  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:23.135561  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:23.135577  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:23.217468  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:23.217521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:20.588507  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.588759  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:21.611467  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:24.110813  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:22.514805  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.012583  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.013095  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:25.771615  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:25.785037  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:25.785115  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:25.821070  188656 cri.go:89] found id: ""
	I0731 21:01:25.821100  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.821112  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:25.821120  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:25.821176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:25.856174  188656 cri.go:89] found id: ""
	I0731 21:01:25.856206  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.856217  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:25.856225  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:25.856288  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:25.889440  188656 cri.go:89] found id: ""
	I0731 21:01:25.889473  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.889483  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:25.889490  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:25.889546  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:25.924770  188656 cri.go:89] found id: ""
	I0731 21:01:25.924796  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.924804  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:25.924811  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:25.924860  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:25.963529  188656 cri.go:89] found id: ""
	I0731 21:01:25.963576  188656 logs.go:276] 0 containers: []
	W0731 21:01:25.963588  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:25.963595  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:25.963670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:26.000033  188656 cri.go:89] found id: ""
	I0731 21:01:26.000060  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.000069  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:26.000076  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:26.000133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:26.035310  188656 cri.go:89] found id: ""
	I0731 21:01:26.035341  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.035353  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:26.035359  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:26.035423  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:26.070096  188656 cri.go:89] found id: ""
	I0731 21:01:26.070119  188656 logs.go:276] 0 containers: []
	W0731 21:01:26.070127  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:26.070138  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:26.070149  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:26.141198  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:26.141220  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:26.141237  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:26.219766  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:26.219805  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:26.264836  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:26.264864  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:26.316672  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:26.316709  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:28.832882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:24.588907  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:27.088961  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.089538  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:26.111336  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.609453  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:30.610379  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:29.014929  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:31.512827  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:28.846243  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:28.846307  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:28.880312  188656 cri.go:89] found id: ""
	I0731 21:01:28.880339  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.880350  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:28.880358  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:28.880419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:28.914625  188656 cri.go:89] found id: ""
	I0731 21:01:28.914652  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.914660  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:28.914667  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:28.914726  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:28.949138  188656 cri.go:89] found id: ""
	I0731 21:01:28.949173  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.949185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:28.949192  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:28.949264  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:28.985229  188656 cri.go:89] found id: ""
	I0731 21:01:28.985258  188656 logs.go:276] 0 containers: []
	W0731 21:01:28.985266  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:28.985272  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:28.985326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:29.021520  188656 cri.go:89] found id: ""
	I0731 21:01:29.021550  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.021562  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:29.021568  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:29.021629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:29.058639  188656 cri.go:89] found id: ""
	I0731 21:01:29.058671  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.058682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:29.058690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:29.058755  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:29.105435  188656 cri.go:89] found id: ""
	I0731 21:01:29.105458  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.105466  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:29.105472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:29.105528  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:29.147118  188656 cri.go:89] found id: ""
	I0731 21:01:29.147144  188656 logs.go:276] 0 containers: []
	W0731 21:01:29.147152  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:29.147161  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:29.147177  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:29.231698  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:29.231735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:29.276163  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:29.276200  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:29.330551  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:29.330589  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:29.350293  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:29.350323  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:29.456073  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:31.956964  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:31.970712  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:31.970780  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:32.009546  188656 cri.go:89] found id: ""
	I0731 21:01:32.009574  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.009585  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:32.009593  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:32.009674  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:32.046622  188656 cri.go:89] found id: ""
	I0731 21:01:32.046661  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.046672  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:32.046680  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:32.046748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:32.080958  188656 cri.go:89] found id: ""
	I0731 21:01:32.080985  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.080993  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:32.080998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:32.081052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:32.117454  188656 cri.go:89] found id: ""
	I0731 21:01:32.117480  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.117489  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:32.117495  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:32.117561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:32.152335  188656 cri.go:89] found id: ""
	I0731 21:01:32.152369  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.152380  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:32.152387  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:32.152441  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:32.186631  188656 cri.go:89] found id: ""
	I0731 21:01:32.186670  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.186682  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:32.186691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:32.186761  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:32.221496  188656 cri.go:89] found id: ""
	I0731 21:01:32.221533  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.221544  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:32.221551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:32.221632  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:32.256315  188656 cri.go:89] found id: ""
	I0731 21:01:32.256341  188656 logs.go:276] 0 containers: []
	W0731 21:01:32.256350  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:32.256360  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:32.256372  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:32.295759  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:32.295788  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:32.347855  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:32.347888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:32.360982  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:32.361012  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:32.433900  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:32.433926  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:32.433947  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:31.588474  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.590513  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:32.610672  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.110698  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:33.514600  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:36.013157  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:35.013369  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:35.027203  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:35.027298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:35.065567  188656 cri.go:89] found id: ""
	I0731 21:01:35.065599  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.065610  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:35.065617  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:35.065686  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:35.104285  188656 cri.go:89] found id: ""
	I0731 21:01:35.104317  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.104328  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:35.104335  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:35.104430  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:35.151081  188656 cri.go:89] found id: ""
	I0731 21:01:35.151108  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.151119  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:35.151127  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:35.151190  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:35.196844  188656 cri.go:89] found id: ""
	I0731 21:01:35.196875  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.196886  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:35.196894  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:35.196964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:35.253581  188656 cri.go:89] found id: ""
	I0731 21:01:35.253612  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.253623  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:35.253630  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:35.253703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:35.295791  188656 cri.go:89] found id: ""
	I0731 21:01:35.295819  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.295830  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:35.295838  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:35.295904  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:35.329405  188656 cri.go:89] found id: ""
	I0731 21:01:35.329441  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.329454  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:35.329462  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:35.329526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:35.363976  188656 cri.go:89] found id: ""
	I0731 21:01:35.364009  188656 logs.go:276] 0 containers: []
	W0731 21:01:35.364022  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:35.364035  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:35.364051  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:35.421213  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:35.421253  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:35.436612  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:35.436646  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:35.514154  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:35.514182  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:35.514197  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:35.588048  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:35.588082  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:38.133466  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:38.147071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:38.147142  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:38.179992  188656 cri.go:89] found id: ""
	I0731 21:01:38.180024  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.180036  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:38.180044  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:38.180116  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:38.213784  188656 cri.go:89] found id: ""
	I0731 21:01:38.213816  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.213827  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:38.213834  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:38.213901  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:38.254190  188656 cri.go:89] found id: ""
	I0731 21:01:38.254220  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.254229  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:38.254235  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:38.254284  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:38.289695  188656 cri.go:89] found id: ""
	I0731 21:01:38.289732  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.289743  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:38.289751  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:38.289819  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:38.327743  188656 cri.go:89] found id: ""
	I0731 21:01:38.327777  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.327788  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:38.327797  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:38.327853  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:38.361373  188656 cri.go:89] found id: ""
	I0731 21:01:38.361409  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.361421  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:38.361428  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:38.361501  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:38.396832  188656 cri.go:89] found id: ""
	I0731 21:01:38.396860  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.396868  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:38.396873  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:38.396923  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:38.431822  188656 cri.go:89] found id: ""
	I0731 21:01:38.431855  188656 logs.go:276] 0 containers: []
	W0731 21:01:38.431868  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:38.431880  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:38.431895  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:38.481994  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:38.482028  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:38.495885  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:38.495911  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:38.563384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:38.563411  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:38.563437  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:38.646806  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:38.646848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:36.089465  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.590301  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:37.611057  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.110731  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:38.015769  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:40.513690  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:41.187323  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:41.200995  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:41.201063  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:41.241620  188656 cri.go:89] found id: ""
	I0731 21:01:41.241651  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.241663  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:41.241671  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:41.241745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:41.279565  188656 cri.go:89] found id: ""
	I0731 21:01:41.279595  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.279604  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:41.279609  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:41.279666  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:41.320710  188656 cri.go:89] found id: ""
	I0731 21:01:41.320744  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.320755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:41.320763  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:41.320834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:41.356428  188656 cri.go:89] found id: ""
	I0731 21:01:41.356460  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.356472  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:41.356480  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:41.356544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:41.390493  188656 cri.go:89] found id: ""
	I0731 21:01:41.390525  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.390536  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:41.390544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:41.390612  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:41.424244  188656 cri.go:89] found id: ""
	I0731 21:01:41.424271  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.424282  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:41.424290  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:41.424350  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:41.459916  188656 cri.go:89] found id: ""
	I0731 21:01:41.459946  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.459955  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:41.459961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:41.460012  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:41.493891  188656 cri.go:89] found id: ""
	I0731 21:01:41.493917  188656 logs.go:276] 0 containers: []
	W0731 21:01:41.493926  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:41.493936  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:41.493950  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:41.544066  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:41.544106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:41.558504  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:41.558534  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:41.632996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:41.633021  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:41.633039  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:41.712637  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:41.712677  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:41.087979  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:43.088834  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.610136  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:45.109986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:42.514059  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.514535  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.014970  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:44.255947  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:44.268961  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:44.269050  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:44.304621  188656 cri.go:89] found id: ""
	I0731 21:01:44.304656  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.304668  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:44.304676  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:44.304732  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:44.339389  188656 cri.go:89] found id: ""
	I0731 21:01:44.339429  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.339441  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:44.339448  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:44.339510  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:44.373069  188656 cri.go:89] found id: ""
	I0731 21:01:44.373095  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.373103  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:44.373110  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:44.373179  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:44.408784  188656 cri.go:89] found id: ""
	I0731 21:01:44.408812  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.408821  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:44.408829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:44.408896  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:44.445636  188656 cri.go:89] found id: ""
	I0731 21:01:44.445671  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.445682  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:44.445690  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:44.445759  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:44.483529  188656 cri.go:89] found id: ""
	I0731 21:01:44.483565  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.483577  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:44.483585  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:44.483643  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:44.517959  188656 cri.go:89] found id: ""
	I0731 21:01:44.517980  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.517987  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:44.517993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:44.518042  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:44.552322  188656 cri.go:89] found id: ""
	I0731 21:01:44.552367  188656 logs.go:276] 0 containers: []
	W0731 21:01:44.552392  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:44.552405  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:44.552421  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:44.625005  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:44.625030  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:44.625043  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:44.702547  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:44.702585  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:44.741754  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:44.741792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:44.795179  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:44.795216  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.309995  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:47.323993  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:47.324076  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:47.365546  188656 cri.go:89] found id: ""
	I0731 21:01:47.365576  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.365587  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:47.365595  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:47.365682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:47.402774  188656 cri.go:89] found id: ""
	I0731 21:01:47.402810  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.402822  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:47.402831  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:47.402899  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:47.440716  188656 cri.go:89] found id: ""
	I0731 21:01:47.440746  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.440755  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:47.440761  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:47.440811  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:47.479418  188656 cri.go:89] found id: ""
	I0731 21:01:47.479450  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.479461  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:47.479469  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:47.479535  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:47.514027  188656 cri.go:89] found id: ""
	I0731 21:01:47.514065  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.514074  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:47.514081  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:47.514149  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:47.550178  188656 cri.go:89] found id: ""
	I0731 21:01:47.550203  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.550212  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:47.550218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:47.550271  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:47.587844  188656 cri.go:89] found id: ""
	I0731 21:01:47.587873  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.587883  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:47.587891  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:47.587945  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:47.627581  188656 cri.go:89] found id: ""
	I0731 21:01:47.627608  188656 logs.go:276] 0 containers: []
	W0731 21:01:47.627620  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:47.627633  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:47.627647  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:47.683364  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:47.683408  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:47.697882  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:47.697917  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:47.773804  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:47.773834  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:47.773848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:47.859356  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:47.859404  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:45.090199  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.091328  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:47.610631  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.109476  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:49.514186  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.013486  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:50.402403  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:50.417269  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:50.417332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:50.452762  188656 cri.go:89] found id: ""
	I0731 21:01:50.452786  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.452793  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:50.452799  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:50.452852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:50.486741  188656 cri.go:89] found id: ""
	I0731 21:01:50.486771  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.486782  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:50.486789  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:50.486855  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:50.526144  188656 cri.go:89] found id: ""
	I0731 21:01:50.526174  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.526185  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:50.526193  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:50.526246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:50.560957  188656 cri.go:89] found id: ""
	I0731 21:01:50.560985  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.560995  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:50.561003  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:50.561065  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:50.597228  188656 cri.go:89] found id: ""
	I0731 21:01:50.597258  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.597269  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:50.597275  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:50.597357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:50.638153  188656 cri.go:89] found id: ""
	I0731 21:01:50.638183  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.638199  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:50.638208  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:50.638270  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:50.672236  188656 cri.go:89] found id: ""
	I0731 21:01:50.672266  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.672274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:50.672280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:50.672340  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:50.704069  188656 cri.go:89] found id: ""
	I0731 21:01:50.704093  188656 logs.go:276] 0 containers: []
	W0731 21:01:50.704102  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:50.704112  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:50.704125  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:50.757973  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:50.758010  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:50.771203  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:50.771229  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:50.842937  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:50.842956  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:50.842969  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:50.925819  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:50.925857  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.470691  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:53.485260  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:53.485332  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:53.524110  188656 cri.go:89] found id: ""
	I0731 21:01:53.524139  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.524148  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:53.524154  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:53.524215  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:53.557642  188656 cri.go:89] found id: ""
	I0731 21:01:53.557668  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.557676  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:53.557682  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:53.557737  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:53.595594  188656 cri.go:89] found id: ""
	I0731 21:01:53.595622  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.595641  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:53.595647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:53.595712  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:53.634458  188656 cri.go:89] found id: ""
	I0731 21:01:53.634487  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.634499  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:53.634507  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:53.634567  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:53.674124  188656 cri.go:89] found id: ""
	I0731 21:01:53.674149  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.674157  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:53.674164  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:53.674234  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:53.706861  188656 cri.go:89] found id: ""
	I0731 21:01:53.706888  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.706897  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:53.706903  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:53.706957  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:53.745476  188656 cri.go:89] found id: ""
	I0731 21:01:53.745504  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.745511  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:53.745522  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:53.745575  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:53.780847  188656 cri.go:89] found id: ""
	I0731 21:01:53.780878  188656 logs.go:276] 0 containers: []
	W0731 21:01:53.780889  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:53.780902  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:53.780922  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:01:49.589017  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.088587  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.088885  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:52.109889  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.110634  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:54.014383  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.512884  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	W0731 21:01:53.853469  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:53.853497  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:53.853517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:53.930506  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:53.930544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:53.975439  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:53.975475  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:54.027903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:54.027937  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.542860  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:56.557744  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:56.557813  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:56.596034  188656 cri.go:89] found id: ""
	I0731 21:01:56.596065  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.596075  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:56.596082  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:56.596146  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:56.631531  188656 cri.go:89] found id: ""
	I0731 21:01:56.631561  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.631572  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:56.631579  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:56.631653  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:56.665824  188656 cri.go:89] found id: ""
	I0731 21:01:56.665853  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.665865  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:56.665872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:56.665940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:56.698965  188656 cri.go:89] found id: ""
	I0731 21:01:56.698993  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.699002  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:56.699008  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:56.699074  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:56.735314  188656 cri.go:89] found id: ""
	I0731 21:01:56.735347  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.735359  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:56.735367  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:56.735443  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:56.770350  188656 cri.go:89] found id: ""
	I0731 21:01:56.770383  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.770393  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:56.770402  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:56.770485  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:56.808934  188656 cri.go:89] found id: ""
	I0731 21:01:56.808962  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.808970  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:56.808976  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:56.809027  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:56.845305  188656 cri.go:89] found id: ""
	I0731 21:01:56.845331  188656 logs.go:276] 0 containers: []
	W0731 21:01:56.845354  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:56.845366  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:56.845383  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:01:56.922810  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:01:56.922832  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:01:56.922846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:01:56.998009  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:01:56.998046  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:01:57.037905  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:57.037934  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:57.092438  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:57.092469  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:56.591334  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.089696  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:56.110825  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.111013  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.111696  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:58.513270  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:00.514474  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:01:59.608087  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:01:59.622465  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:01:59.622537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:01:59.660221  188656 cri.go:89] found id: ""
	I0731 21:01:59.660254  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.660265  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:01:59.660274  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:01:59.660338  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:01:59.696158  188656 cri.go:89] found id: ""
	I0731 21:01:59.696193  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.696205  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:01:59.696213  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:01:59.696272  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:01:59.733607  188656 cri.go:89] found id: ""
	I0731 21:01:59.733635  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.733646  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:01:59.733656  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:01:59.733727  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:01:59.770298  188656 cri.go:89] found id: ""
	I0731 21:01:59.770327  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.770336  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:01:59.770342  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:01:59.770396  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:01:59.805630  188656 cri.go:89] found id: ""
	I0731 21:01:59.805659  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.805670  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:01:59.805682  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:01:59.805749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:01:59.841064  188656 cri.go:89] found id: ""
	I0731 21:01:59.841089  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.841098  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:01:59.841106  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:01:59.841166  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:01:59.877237  188656 cri.go:89] found id: ""
	I0731 21:01:59.877265  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.877274  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:01:59.877284  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:01:59.877364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:01:59.917102  188656 cri.go:89] found id: ""
	I0731 21:01:59.917138  188656 logs.go:276] 0 containers: []
	W0731 21:01:59.917166  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:01:59.917179  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:01:59.917196  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:01:59.971806  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:01:59.971846  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:01:59.986267  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:01:59.986304  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:00.063185  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:00.063227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:00.063244  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:00.148498  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:00.148541  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:02.690235  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:02.704623  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:02.704703  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:02.740557  188656 cri.go:89] found id: ""
	I0731 21:02:02.740588  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.740599  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:02.740606  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:02.740667  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:02.776340  188656 cri.go:89] found id: ""
	I0731 21:02:02.776382  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.776391  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:02.776396  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:02.776449  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:02.811645  188656 cri.go:89] found id: ""
	I0731 21:02:02.811673  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.811683  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:02.811691  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:02.811754  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:02.847226  188656 cri.go:89] found id: ""
	I0731 21:02:02.847259  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.847267  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:02.847273  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:02.847326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:02.885591  188656 cri.go:89] found id: ""
	I0731 21:02:02.885617  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.885626  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:02.885631  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:02.885694  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:02.924250  188656 cri.go:89] found id: ""
	I0731 21:02:02.924281  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.924289  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:02.924296  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:02.924358  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:02.959608  188656 cri.go:89] found id: ""
	I0731 21:02:02.959638  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.959649  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:02.959657  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:02.959731  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:02.998175  188656 cri.go:89] found id: ""
	I0731 21:02:02.998205  188656 logs.go:276] 0 containers: []
	W0731 21:02:02.998215  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:02.998228  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:02.998248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:03.053320  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:03.053382  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:03.067681  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:03.067711  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:03.145222  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:03.145251  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:03.145270  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:03.228413  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:03.228456  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:01.590197  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:04.087692  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:02.610477  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.110544  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:03.016030  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.513082  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:05.780407  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:05.793872  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:05.793952  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:05.828940  188656 cri.go:89] found id: ""
	I0731 21:02:05.828971  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.828980  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:05.828987  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:05.829051  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:05.866470  188656 cri.go:89] found id: ""
	I0731 21:02:05.866503  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.866515  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:05.866522  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:05.866594  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:05.904756  188656 cri.go:89] found id: ""
	I0731 21:02:05.904792  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.904807  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:05.904814  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:05.904868  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:05.941534  188656 cri.go:89] found id: ""
	I0731 21:02:05.941564  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.941574  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:05.941581  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:05.941649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:05.980413  188656 cri.go:89] found id: ""
	I0731 21:02:05.980453  188656 logs.go:276] 0 containers: []
	W0731 21:02:05.980465  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:05.980472  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:05.980563  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:06.023226  188656 cri.go:89] found id: ""
	I0731 21:02:06.023258  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.023269  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:06.023277  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:06.023345  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:06.061098  188656 cri.go:89] found id: ""
	I0731 21:02:06.061130  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.061138  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:06.061145  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:06.061195  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:06.097825  188656 cri.go:89] found id: ""
	I0731 21:02:06.097852  188656 logs.go:276] 0 containers: []
	W0731 21:02:06.097860  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:06.097870  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:06.097883  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:06.149181  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:06.149223  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:06.164610  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:06.164651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:06.248639  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:06.248666  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:06.248684  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:06.332445  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:06.332486  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:06.089967  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.588610  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.610691  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.611166  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:07.513999  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:09.514554  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:11.516493  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:08.873697  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:08.887632  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:08.887745  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:08.926002  188656 cri.go:89] found id: ""
	I0731 21:02:08.926032  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.926042  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:08.926051  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:08.926117  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:08.962999  188656 cri.go:89] found id: ""
	I0731 21:02:08.963028  188656 logs.go:276] 0 containers: []
	W0731 21:02:08.963039  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:08.963047  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:08.963103  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:09.023016  188656 cri.go:89] found id: ""
	I0731 21:02:09.023043  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.023051  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:09.023057  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:09.023109  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:09.059672  188656 cri.go:89] found id: ""
	I0731 21:02:09.059699  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.059708  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:09.059714  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:09.059774  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:09.097603  188656 cri.go:89] found id: ""
	I0731 21:02:09.097635  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.097645  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:09.097653  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:09.097720  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:09.136210  188656 cri.go:89] found id: ""
	I0731 21:02:09.136240  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.136251  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:09.136259  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:09.136326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:09.176167  188656 cri.go:89] found id: ""
	I0731 21:02:09.176204  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.176211  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:09.176218  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:09.176277  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:09.214151  188656 cri.go:89] found id: ""
	I0731 21:02:09.214180  188656 logs.go:276] 0 containers: []
	W0731 21:02:09.214189  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:09.214199  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:09.214212  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:09.267579  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:09.267618  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:09.282420  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:09.282445  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:09.354067  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:09.354092  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:09.354111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:09.433454  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:09.433500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.979715  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:11.993050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:11.993123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:12.027731  188656 cri.go:89] found id: ""
	I0731 21:02:12.027759  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.027767  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:12.027773  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:12.027834  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:12.064410  188656 cri.go:89] found id: ""
	I0731 21:02:12.064442  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.064452  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:12.064459  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:12.064525  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:12.101061  188656 cri.go:89] found id: ""
	I0731 21:02:12.101096  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.101107  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:12.101115  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:12.101176  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:12.142240  188656 cri.go:89] found id: ""
	I0731 21:02:12.142271  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.142284  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:12.142292  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:12.142357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:12.184949  188656 cri.go:89] found id: ""
	I0731 21:02:12.184980  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.184988  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:12.184994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:12.185064  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:12.226031  188656 cri.go:89] found id: ""
	I0731 21:02:12.226068  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.226080  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:12.226089  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:12.226155  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:12.272880  188656 cri.go:89] found id: ""
	I0731 21:02:12.272913  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.272923  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:12.272931  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:12.272989  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:12.306968  188656 cri.go:89] found id: ""
	I0731 21:02:12.307011  188656 logs.go:276] 0 containers: []
	W0731 21:02:12.307033  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:12.307068  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:12.307090  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:12.359357  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:12.359402  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:12.374817  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:12.374848  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:12.445107  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:12.445128  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:12.445141  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:12.530017  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:12.530058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:11.088281  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:13.090442  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:12.110720  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.611142  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:14.013967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:16.014021  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:15.070277  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:15.084326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:15.084411  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:15.123513  188656 cri.go:89] found id: ""
	I0731 21:02:15.123549  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.123562  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:15.123569  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:15.123624  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:15.159855  188656 cri.go:89] found id: ""
	I0731 21:02:15.159888  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.159899  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:15.159908  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:15.159973  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:15.195879  188656 cri.go:89] found id: ""
	I0731 21:02:15.195911  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.195919  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:15.195926  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:15.195986  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:15.231216  188656 cri.go:89] found id: ""
	I0731 21:02:15.231249  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.231258  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:15.231265  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:15.231331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:15.265711  188656 cri.go:89] found id: ""
	I0731 21:02:15.265740  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.265748  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:15.265754  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:15.265803  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:15.300991  188656 cri.go:89] found id: ""
	I0731 21:02:15.301020  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.301027  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:15.301033  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:15.301083  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:15.338507  188656 cri.go:89] found id: ""
	I0731 21:02:15.338533  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.338542  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:15.338550  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:15.338614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:15.375540  188656 cri.go:89] found id: ""
	I0731 21:02:15.375583  188656 logs.go:276] 0 containers: []
	W0731 21:02:15.375595  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:15.375606  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:15.375631  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:15.428903  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:15.428946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:15.444018  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:15.444052  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:15.518807  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.518842  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:15.518859  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:15.602655  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:15.602693  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.158731  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:18.172861  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:18.172940  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:18.207451  188656 cri.go:89] found id: ""
	I0731 21:02:18.207480  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.207489  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:18.207495  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:18.207555  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:18.244974  188656 cri.go:89] found id: ""
	I0731 21:02:18.245004  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.245013  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:18.245019  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:18.245079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:18.281589  188656 cri.go:89] found id: ""
	I0731 21:02:18.281622  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.281630  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:18.281637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:18.281698  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:18.321413  188656 cri.go:89] found id: ""
	I0731 21:02:18.321445  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.321455  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:18.321461  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:18.321526  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:18.360600  188656 cri.go:89] found id: ""
	I0731 21:02:18.360627  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.360639  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:18.360647  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:18.360707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:18.396312  188656 cri.go:89] found id: ""
	I0731 21:02:18.396344  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.396356  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:18.396364  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:18.396451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:18.431586  188656 cri.go:89] found id: ""
	I0731 21:02:18.431618  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.431630  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:18.431637  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:18.431711  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:18.472995  188656 cri.go:89] found id: ""
	I0731 21:02:18.473025  188656 logs.go:276] 0 containers: []
	W0731 21:02:18.473035  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:18.473047  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:18.473063  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:18.558826  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:18.558865  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:18.600083  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:18.600110  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:18.657944  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:18.657988  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:18.672860  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:18.672888  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:18.748806  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:15.589795  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.088699  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:17.112784  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:19.609312  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:18.513798  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.014437  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.249418  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:21.263304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:21.263385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:21.298591  188656 cri.go:89] found id: ""
	I0731 21:02:21.298624  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.298635  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:21.298643  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:21.298707  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:21.335913  188656 cri.go:89] found id: ""
	I0731 21:02:21.335939  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.335947  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:21.335954  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:21.336011  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:21.378314  188656 cri.go:89] found id: ""
	I0731 21:02:21.378347  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.378359  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:21.378368  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:21.378436  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:21.422707  188656 cri.go:89] found id: ""
	I0731 21:02:21.422738  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.422748  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:21.422757  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:21.422826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:21.487851  188656 cri.go:89] found id: ""
	I0731 21:02:21.487878  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.487887  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:21.487893  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:21.487946  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:21.528944  188656 cri.go:89] found id: ""
	I0731 21:02:21.528970  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.528981  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:21.528990  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:21.529054  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:21.565091  188656 cri.go:89] found id: ""
	I0731 21:02:21.565118  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.565126  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:21.565132  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:21.565182  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:21.599985  188656 cri.go:89] found id: ""
	I0731 21:02:21.600015  188656 logs.go:276] 0 containers: []
	W0731 21:02:21.600027  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:21.600041  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:21.600057  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:21.652065  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:21.652106  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:21.666497  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:21.666528  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:21.741853  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:21.741893  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:21.741919  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:21.822478  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:21.822517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:20.089186  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:22.589558  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:21.610996  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.111590  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:23.513209  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:25.514400  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:24.363018  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:24.375640  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:24.375704  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:24.411383  188656 cri.go:89] found id: ""
	I0731 21:02:24.411416  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.411427  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:24.411436  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:24.411513  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:24.447536  188656 cri.go:89] found id: ""
	I0731 21:02:24.447565  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.447573  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:24.447578  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:24.447651  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:24.489270  188656 cri.go:89] found id: ""
	I0731 21:02:24.489301  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.489311  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:24.489320  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:24.489398  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:24.527891  188656 cri.go:89] found id: ""
	I0731 21:02:24.527922  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.527932  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:24.527938  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:24.527998  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:24.566854  188656 cri.go:89] found id: ""
	I0731 21:02:24.566886  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.566897  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:24.566904  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:24.566974  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:24.606234  188656 cri.go:89] found id: ""
	I0731 21:02:24.606267  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.606278  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:24.606285  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:24.606357  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:24.642880  188656 cri.go:89] found id: ""
	I0731 21:02:24.642909  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.642921  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:24.642929  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:24.642982  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:24.680069  188656 cri.go:89] found id: ""
	I0731 21:02:24.680101  188656 logs.go:276] 0 containers: []
	W0731 21:02:24.680112  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:24.680124  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:24.680142  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:24.735337  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:24.735378  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:24.749010  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:24.749040  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:24.826406  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:24.826441  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:24.826458  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.906995  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:24.907049  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.451405  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:27.474178  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:27.474251  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:27.514912  188656 cri.go:89] found id: ""
	I0731 21:02:27.514938  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.514945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:27.514951  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:27.515007  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:27.552850  188656 cri.go:89] found id: ""
	I0731 21:02:27.552880  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.552890  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:27.552896  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:27.552953  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:27.590468  188656 cri.go:89] found id: ""
	I0731 21:02:27.590496  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.590503  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:27.590509  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:27.590572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:27.626295  188656 cri.go:89] found id: ""
	I0731 21:02:27.626322  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.626330  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:27.626339  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:27.626391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:27.662654  188656 cri.go:89] found id: ""
	I0731 21:02:27.662690  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.662701  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:27.662708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:27.662770  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:27.699528  188656 cri.go:89] found id: ""
	I0731 21:02:27.699558  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.699566  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:27.699572  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:27.699639  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:27.740501  188656 cri.go:89] found id: ""
	I0731 21:02:27.740528  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.740539  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:27.740547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:27.740613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:27.778919  188656 cri.go:89] found id: ""
	I0731 21:02:27.778954  188656 logs.go:276] 0 containers: []
	W0731 21:02:27.778966  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:27.778980  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:27.778999  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:27.815475  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:27.815500  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:27.866578  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:27.866615  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:27.880799  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:27.880830  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:27.948987  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:27.949014  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:27.949032  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:24.596180  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:27.088624  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:26.610897  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:29.110263  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:28.014828  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.514006  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:30.532314  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:30.546245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:30.546317  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:30.581736  188656 cri.go:89] found id: ""
	I0731 21:02:30.581763  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.581772  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:30.581778  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:30.581837  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:30.618790  188656 cri.go:89] found id: ""
	I0731 21:02:30.618816  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.618824  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:30.618830  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:30.618886  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:30.654504  188656 cri.go:89] found id: ""
	I0731 21:02:30.654530  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.654538  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:30.654544  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:30.654603  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:30.690570  188656 cri.go:89] found id: ""
	I0731 21:02:30.690598  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.690609  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:30.690617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:30.690683  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:30.739676  188656 cri.go:89] found id: ""
	I0731 21:02:30.739705  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.739715  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:30.739723  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:30.739789  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:30.777860  188656 cri.go:89] found id: ""
	I0731 21:02:30.777891  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.777902  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:30.777911  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:30.777995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:30.814036  188656 cri.go:89] found id: ""
	I0731 21:02:30.814073  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.814088  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:30.814096  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:30.814168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:30.847262  188656 cri.go:89] found id: ""
	I0731 21:02:30.847292  188656 logs.go:276] 0 containers: []
	W0731 21:02:30.847304  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:30.847316  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:30.847338  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:30.898556  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:30.898596  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:30.912940  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:30.912974  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:30.987384  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:30.987405  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:30.987419  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:31.071376  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:31.071416  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:33.613677  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:33.628304  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:33.628380  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:33.662932  188656 cri.go:89] found id: ""
	I0731 21:02:33.662965  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.662977  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:33.662985  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:33.663055  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:33.697445  188656 cri.go:89] found id: ""
	I0731 21:02:33.697477  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.697487  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:33.697493  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:33.697553  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:33.734480  188656 cri.go:89] found id: ""
	I0731 21:02:33.734516  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.734527  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:33.734536  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:33.734614  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:33.770069  188656 cri.go:89] found id: ""
	I0731 21:02:33.770095  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.770104  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:33.770111  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:33.770194  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:33.806315  188656 cri.go:89] found id: ""
	I0731 21:02:33.806341  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.806350  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:33.806356  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:33.806408  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:29.592432  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:32.088842  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:34.089378  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:31.112420  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.611815  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.014022  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:35.014517  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:37.018514  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:33.842747  188656 cri.go:89] found id: ""
	I0731 21:02:33.842775  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.842782  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:33.842789  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:33.842856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:33.877581  188656 cri.go:89] found id: ""
	I0731 21:02:33.877607  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.877616  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:33.877622  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:33.877682  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:33.913238  188656 cri.go:89] found id: ""
	I0731 21:02:33.913263  188656 logs.go:276] 0 containers: []
	W0731 21:02:33.913271  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:33.913282  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:33.913298  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:33.967112  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:33.967148  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:33.980961  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:33.980994  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:34.054886  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:34.054917  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:34.054939  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:34.143088  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:34.143127  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:36.687110  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:36.700649  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:36.700725  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:36.737796  188656 cri.go:89] found id: ""
	I0731 21:02:36.737829  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.737841  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:36.737849  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:36.737916  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:36.773010  188656 cri.go:89] found id: ""
	I0731 21:02:36.773048  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.773059  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:36.773067  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:36.773136  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:36.813945  188656 cri.go:89] found id: ""
	I0731 21:02:36.813978  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.813988  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:36.813994  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:36.814047  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:36.849826  188656 cri.go:89] found id: ""
	I0731 21:02:36.849860  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.849872  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:36.849880  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:36.849943  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:36.887200  188656 cri.go:89] found id: ""
	I0731 21:02:36.887233  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.887244  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:36.887253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:36.887391  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:36.922529  188656 cri.go:89] found id: ""
	I0731 21:02:36.922562  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.922573  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:36.922582  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:36.922644  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:36.958119  188656 cri.go:89] found id: ""
	I0731 21:02:36.958154  188656 logs.go:276] 0 containers: []
	W0731 21:02:36.958166  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:36.958174  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:36.958240  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:37.001071  188656 cri.go:89] found id: ""
	I0731 21:02:37.001104  188656 logs.go:276] 0 containers: []
	W0731 21:02:37.001113  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:37.001123  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:37.001136  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:37.041248  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:37.041288  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:37.100519  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:37.100558  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:37.115157  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:37.115188  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:37.191232  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:37.191259  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:37.191277  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:36.588213  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.589224  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:36.109307  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:38.110675  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:40.111284  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.514052  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.013265  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:39.772834  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:39.788137  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:39.788203  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:39.827329  188656 cri.go:89] found id: ""
	I0731 21:02:39.827361  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.827371  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:39.827378  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:39.827458  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:39.864855  188656 cri.go:89] found id: ""
	I0731 21:02:39.864882  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.864889  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:39.864897  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:39.864958  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:39.901955  188656 cri.go:89] found id: ""
	I0731 21:02:39.901981  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.901990  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:39.901996  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:39.902059  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:39.941376  188656 cri.go:89] found id: ""
	I0731 21:02:39.941402  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.941412  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:39.941418  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:39.941473  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:39.975321  188656 cri.go:89] found id: ""
	I0731 21:02:39.975352  188656 logs.go:276] 0 containers: []
	W0731 21:02:39.975364  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:39.975394  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:39.975465  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:40.010106  188656 cri.go:89] found id: ""
	I0731 21:02:40.010136  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.010148  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:40.010157  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:40.010220  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:40.043963  188656 cri.go:89] found id: ""
	I0731 21:02:40.043997  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.044009  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:40.044017  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:40.044089  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:40.079178  188656 cri.go:89] found id: ""
	I0731 21:02:40.079216  188656 logs.go:276] 0 containers: []
	W0731 21:02:40.079224  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:40.079234  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:40.079248  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:40.141115  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:40.141158  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:40.156722  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:40.156758  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:40.233758  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:40.233782  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:40.233797  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:40.317316  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:40.317375  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:42.858649  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:42.872135  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:42.872221  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:42.911966  188656 cri.go:89] found id: ""
	I0731 21:02:42.911998  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.912007  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:42.912014  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:42.912081  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:42.950036  188656 cri.go:89] found id: ""
	I0731 21:02:42.950070  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.950079  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:42.950085  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:42.950138  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:42.987201  188656 cri.go:89] found id: ""
	I0731 21:02:42.987233  188656 logs.go:276] 0 containers: []
	W0731 21:02:42.987245  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:42.987253  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:42.987326  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:43.027250  188656 cri.go:89] found id: ""
	I0731 21:02:43.027285  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.027297  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:43.027306  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:43.027374  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:43.063419  188656 cri.go:89] found id: ""
	I0731 21:02:43.063448  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.063456  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:43.063463  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:43.063527  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:43.101155  188656 cri.go:89] found id: ""
	I0731 21:02:43.101184  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.101193  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:43.101199  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:43.101249  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:43.142633  188656 cri.go:89] found id: ""
	I0731 21:02:43.142658  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.142667  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:43.142675  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:43.142741  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:43.177747  188656 cri.go:89] found id: ""
	I0731 21:02:43.177780  188656 logs.go:276] 0 containers: []
	W0731 21:02:43.177789  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:43.177799  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:43.177813  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:43.228074  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:43.228114  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:43.242132  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:43.242165  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:43.313026  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:43.313054  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:43.313072  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:43.394620  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:43.394663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:40.589306  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.589428  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:42.612236  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.110401  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:44.513370  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:46.514350  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:45.937932  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:45.951871  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:45.951964  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:45.987615  188656 cri.go:89] found id: ""
	I0731 21:02:45.987642  188656 logs.go:276] 0 containers: []
	W0731 21:02:45.987650  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:45.987656  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:45.987715  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:46.022632  188656 cri.go:89] found id: ""
	I0731 21:02:46.022659  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.022667  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:46.022674  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:46.022746  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:46.061153  188656 cri.go:89] found id: ""
	I0731 21:02:46.061182  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.061191  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:46.061196  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:46.061246  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:46.099168  188656 cri.go:89] found id: ""
	I0731 21:02:46.099197  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.099206  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:46.099212  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:46.099266  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:46.137269  188656 cri.go:89] found id: ""
	I0731 21:02:46.137300  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.137312  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:46.137321  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:46.137403  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:46.172330  188656 cri.go:89] found id: ""
	I0731 21:02:46.172391  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.172404  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:46.172417  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:46.172489  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:46.213314  188656 cri.go:89] found id: ""
	I0731 21:02:46.213358  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.213370  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:46.213378  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:46.213451  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:46.248663  188656 cri.go:89] found id: ""
	I0731 21:02:46.248697  188656 logs.go:276] 0 containers: []
	W0731 21:02:46.248707  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:46.248719  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:46.248735  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:46.305433  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:46.305472  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:46.319065  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:46.319098  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:46.387025  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:46.387046  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:46.387058  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:46.476721  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:46.476769  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:44.589757  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.089954  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:47.112823  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.114163  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.014193  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.014760  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:49.020882  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:49.036502  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:49.036573  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:49.076478  188656 cri.go:89] found id: ""
	I0731 21:02:49.076509  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.076518  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:49.076525  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:49.076578  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:49.116065  188656 cri.go:89] found id: ""
	I0731 21:02:49.116098  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.116106  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:49.116112  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:49.116168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:49.153237  188656 cri.go:89] found id: ""
	I0731 21:02:49.153274  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.153287  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:49.153295  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:49.153385  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:49.192821  188656 cri.go:89] found id: ""
	I0731 21:02:49.192849  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.192858  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:49.192864  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:49.192918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:49.230627  188656 cri.go:89] found id: ""
	I0731 21:02:49.230660  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.230671  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:49.230679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:49.230749  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:49.266575  188656 cri.go:89] found id: ""
	I0731 21:02:49.266603  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.266611  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:49.266617  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:49.266688  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:49.312489  188656 cri.go:89] found id: ""
	I0731 21:02:49.312522  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.312533  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:49.312541  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:49.312613  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:49.348907  188656 cri.go:89] found id: ""
	I0731 21:02:49.348932  188656 logs.go:276] 0 containers: []
	W0731 21:02:49.348941  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:49.348950  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:49.348965  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:49.363229  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:49.363267  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:49.435708  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:49.435732  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:49.435745  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.522002  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:49.522047  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:49.566823  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:49.566868  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.122660  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:52.136559  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:52.136629  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:52.173198  188656 cri.go:89] found id: ""
	I0731 21:02:52.173227  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.173236  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:52.173242  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:52.173310  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:52.208464  188656 cri.go:89] found id: ""
	I0731 21:02:52.208503  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.208514  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:52.208521  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:52.208590  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:52.246052  188656 cri.go:89] found id: ""
	I0731 21:02:52.246084  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.246091  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:52.246098  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:52.246160  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:52.281798  188656 cri.go:89] found id: ""
	I0731 21:02:52.281831  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.281843  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:52.281852  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:52.281918  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:52.318924  188656 cri.go:89] found id: ""
	I0731 21:02:52.318954  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.318975  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:52.318983  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:52.319052  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:52.356752  188656 cri.go:89] found id: ""
	I0731 21:02:52.356788  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.356800  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:52.356809  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:52.356874  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:52.391507  188656 cri.go:89] found id: ""
	I0731 21:02:52.391537  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.391545  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:52.391551  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:52.391602  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:52.430714  188656 cri.go:89] found id: ""
	I0731 21:02:52.430749  188656 logs.go:276] 0 containers: []
	W0731 21:02:52.430761  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:52.430774  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:52.430792  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:52.482600  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:52.482629  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:52.535317  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:52.535361  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:52.549835  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:52.549874  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:52.628319  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:52.628347  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:52.628365  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:49.590499  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:52.089170  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.089832  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:51.610237  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:54.112782  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:53.513932  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.516784  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:55.216678  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:55.231142  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:55.231225  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:55.266283  188656 cri.go:89] found id: ""
	I0731 21:02:55.266321  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.266334  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:55.266341  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:55.266399  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:55.301457  188656 cri.go:89] found id: ""
	I0731 21:02:55.301493  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.301506  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:55.301514  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:55.301574  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:55.338427  188656 cri.go:89] found id: ""
	I0731 21:02:55.338453  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.338461  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:55.338467  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:55.338521  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:55.373718  188656 cri.go:89] found id: ""
	I0731 21:02:55.373748  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.373757  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:55.373764  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:55.373846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:55.410989  188656 cri.go:89] found id: ""
	I0731 21:02:55.411022  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.411034  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:55.411042  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:55.411100  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:55.452867  188656 cri.go:89] found id: ""
	I0731 21:02:55.452904  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.452915  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:55.452924  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:55.452995  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:55.512781  188656 cri.go:89] found id: ""
	I0731 21:02:55.512809  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.512821  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:55.512829  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:55.512894  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:55.550460  188656 cri.go:89] found id: ""
	I0731 21:02:55.550487  188656 logs.go:276] 0 containers: []
	W0731 21:02:55.550495  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:55.550505  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:55.550521  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:55.625776  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:02:55.625804  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:55.625821  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:55.711276  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:55.711322  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:55.765078  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:55.765111  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:55.818131  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:55.818176  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:58.332914  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:02:58.346908  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:02:58.346992  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:02:58.383641  188656 cri.go:89] found id: ""
	I0731 21:02:58.383686  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.383695  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:02:58.383700  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:02:58.383753  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:02:58.419538  188656 cri.go:89] found id: ""
	I0731 21:02:58.419566  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.419576  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:02:58.419584  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:02:58.419649  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:02:58.457036  188656 cri.go:89] found id: ""
	I0731 21:02:58.457069  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.457080  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:02:58.457088  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:02:58.457162  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:02:58.497596  188656 cri.go:89] found id: ""
	I0731 21:02:58.497621  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.497629  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:02:58.497635  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:02:58.497706  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:02:58.538184  188656 cri.go:89] found id: ""
	I0731 21:02:58.538211  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.538220  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:02:58.538226  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:02:58.538291  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:02:58.584428  188656 cri.go:89] found id: ""
	I0731 21:02:58.584457  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.584468  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:02:58.584476  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:02:58.584537  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:02:58.625052  188656 cri.go:89] found id: ""
	I0731 21:02:58.625084  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.625096  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:02:58.625103  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:02:58.625171  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:02:58.662222  188656 cri.go:89] found id: ""
	I0731 21:02:58.662248  188656 logs.go:276] 0 containers: []
	W0731 21:02:58.662256  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:02:58.662266  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:02:58.662278  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:02:58.740491  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:02:58.740530  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:02:58.782685  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:02:58.782714  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:02:58.833620  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:02:58.833668  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:02:56.091277  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.589516  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:56.609399  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.610957  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.013927  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:00.015179  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:02:58.848679  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:02:58.848713  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:02:58.925496  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
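Every "describe nodes" attempt fails the same way because the bundled kubectl targets an API server on localhost:8443 that is not listening. A quick way to confirm that from the node, independent of kubectl, is a plain TCP dial; this is only an illustrative check under that assumption, not part of the test harness.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig used by the bundled kubectl points at localhost:8443,
	// as the "connection refused" errors above show.
	addr := "localhost:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the kubectl failure mode:
		// no kube-apiserver is bound to the port yet.
		fmt.Printf("API server not reachable at %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("something is listening on %s; the kubectl failures would need another explanation\n", addr)
}
```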
	I0731 21:03:01.426171  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:01.440261  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:01.440341  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:01.477362  188656 cri.go:89] found id: ""
	I0731 21:03:01.477393  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.477405  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:01.477414  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:01.477483  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:01.516640  188656 cri.go:89] found id: ""
	I0731 21:03:01.516675  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.516692  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:01.516701  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:01.516764  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:01.560713  188656 cri.go:89] found id: ""
	I0731 21:03:01.560744  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.560756  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:01.560762  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:01.560844  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:01.604050  188656 cri.go:89] found id: ""
	I0731 21:03:01.604086  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.604097  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:01.604105  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:01.604170  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:01.641358  188656 cri.go:89] found id: ""
	I0731 21:03:01.641391  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.641401  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:01.641406  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:01.641471  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:01.677332  188656 cri.go:89] found id: ""
	I0731 21:03:01.677380  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.677390  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:01.677397  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:01.677459  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:01.713781  188656 cri.go:89] found id: ""
	I0731 21:03:01.713815  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.713826  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:01.713833  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:01.713914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:01.757499  188656 cri.go:89] found id: ""
	I0731 21:03:01.757543  188656 logs.go:276] 0 containers: []
	W0731 21:03:01.757552  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:01.757563  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:01.757575  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:01.832330  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:01.832370  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:01.832384  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:01.918996  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:01.919050  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:01.979268  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:01.979307  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:02.037528  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:02.037564  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:00.591373  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.089405  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:01.110471  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:03.611348  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:02.513998  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:05.015060  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:04.552758  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:04.566881  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:04.566960  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:04.604631  188656 cri.go:89] found id: ""
	I0731 21:03:04.604669  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.604680  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:04.604688  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:04.604791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:04.644027  188656 cri.go:89] found id: ""
	I0731 21:03:04.644052  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.644061  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:04.644068  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:04.644134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:04.680010  188656 cri.go:89] found id: ""
	I0731 21:03:04.680037  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.680045  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:04.680050  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:04.680102  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:04.717095  188656 cri.go:89] found id: ""
	I0731 21:03:04.717123  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.717133  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:04.717140  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:04.717212  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:04.755297  188656 cri.go:89] found id: ""
	I0731 21:03:04.755324  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.755331  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:04.755337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:04.755387  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:04.792073  188656 cri.go:89] found id: ""
	I0731 21:03:04.792104  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.792113  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:04.792119  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:04.792168  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:04.828428  188656 cri.go:89] found id: ""
	I0731 21:03:04.828460  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.828468  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:04.828475  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:04.828541  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:04.863871  188656 cri.go:89] found id: ""
	I0731 21:03:04.863905  188656 logs.go:276] 0 containers: []
	W0731 21:03:04.863916  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:04.863929  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:04.863946  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:04.879591  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:04.879626  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:04.962199  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:04.962227  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:04.962245  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.048502  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:05.048547  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:05.090812  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:05.090838  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:07.647307  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:07.664586  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:07.664656  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:07.719851  188656 cri.go:89] found id: ""
	I0731 21:03:07.719887  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.719899  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:07.719908  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:07.719978  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:07.778295  188656 cri.go:89] found id: ""
	I0731 21:03:07.778330  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.778343  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:07.778350  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:07.778417  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:07.817911  188656 cri.go:89] found id: ""
	I0731 21:03:07.817937  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.817947  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:07.817954  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:07.818004  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:07.853177  188656 cri.go:89] found id: ""
	I0731 21:03:07.853211  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.853222  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:07.853229  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:07.853308  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:07.888992  188656 cri.go:89] found id: ""
	I0731 21:03:07.889020  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.889046  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:07.889055  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:07.889133  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:07.924327  188656 cri.go:89] found id: ""
	I0731 21:03:07.924358  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.924369  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:07.924377  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:07.924461  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:07.964438  188656 cri.go:89] found id: ""
	I0731 21:03:07.964470  188656 logs.go:276] 0 containers: []
	W0731 21:03:07.964480  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:07.964489  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:07.964572  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:08.003566  188656 cri.go:89] found id: ""
	I0731 21:03:08.003610  188656 logs.go:276] 0 containers: []
	W0731 21:03:08.003621  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:08.003634  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:08.003651  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:08.044246  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:08.044286  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:08.097479  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:08.097517  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:08.113636  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:08.113663  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:08.187217  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:08.187244  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:08.187261  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:05.090205  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.589488  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:06.110184  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:08.111598  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.611986  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:07.513036  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:09.513637  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.514176  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:10.771248  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:10.786159  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:10.786232  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:10.823724  188656 cri.go:89] found id: ""
	I0731 21:03:10.823756  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.823769  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:10.823777  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:10.823846  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:10.862440  188656 cri.go:89] found id: ""
	I0731 21:03:10.862468  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.862480  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:10.862488  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:10.862544  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:10.901499  188656 cri.go:89] found id: ""
	I0731 21:03:10.901527  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.901539  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:10.901547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:10.901611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:10.940255  188656 cri.go:89] found id: ""
	I0731 21:03:10.940279  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.940287  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:10.940293  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:10.940356  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:10.975315  188656 cri.go:89] found id: ""
	I0731 21:03:10.975344  188656 logs.go:276] 0 containers: []
	W0731 21:03:10.975353  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:10.975360  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:10.975420  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:11.011453  188656 cri.go:89] found id: ""
	I0731 21:03:11.011482  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.011538  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:11.011549  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:11.011611  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:11.047846  188656 cri.go:89] found id: ""
	I0731 21:03:11.047887  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.047899  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:11.047907  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:11.047972  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:11.086243  188656 cri.go:89] found id: ""
	I0731 21:03:11.086271  188656 logs.go:276] 0 containers: []
	W0731 21:03:11.086282  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:11.086293  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:11.086309  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:11.139390  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:11.139430  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:11.154637  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:11.154669  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:11.225996  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:11.226019  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:11.226035  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:11.305235  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:11.305280  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:09.589831  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:11.590312  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.089750  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.110191  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:15.112258  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:14.013609  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:16.014143  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:13.845792  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:13.859185  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:13.859261  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:13.896017  188656 cri.go:89] found id: ""
	I0731 21:03:13.896047  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.896055  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:13.896061  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:13.896123  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:13.932442  188656 cri.go:89] found id: ""
	I0731 21:03:13.932475  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.932486  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:13.932494  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:13.932564  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:13.971233  188656 cri.go:89] found id: ""
	I0731 21:03:13.971265  188656 logs.go:276] 0 containers: []
	W0731 21:03:13.971274  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:13.971280  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:13.971331  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:14.009757  188656 cri.go:89] found id: ""
	I0731 21:03:14.009787  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.009796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:14.009805  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:14.009870  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:14.047946  188656 cri.go:89] found id: ""
	I0731 21:03:14.047979  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.047990  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:14.047998  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:14.048056  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:14.084687  188656 cri.go:89] found id: ""
	I0731 21:03:14.084720  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.084731  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:14.084739  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:14.084805  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:14.124831  188656 cri.go:89] found id: ""
	I0731 21:03:14.124861  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.124870  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:14.124876  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:14.124929  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:14.161242  188656 cri.go:89] found id: ""
	I0731 21:03:14.161275  188656 logs.go:276] 0 containers: []
	W0731 21:03:14.161286  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:14.161295  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:14.161308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:14.241060  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:14.241115  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:14.282382  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:14.282414  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:14.335201  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:14.335249  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:14.351345  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:14.351379  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:14.436524  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:16.937313  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:16.951403  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:16.951490  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:16.991735  188656 cri.go:89] found id: ""
	I0731 21:03:16.991766  188656 logs.go:276] 0 containers: []
	W0731 21:03:16.991777  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:16.991785  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:16.991852  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:17.030327  188656 cri.go:89] found id: ""
	I0731 21:03:17.030353  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.030360  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:17.030366  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:17.030419  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:17.068161  188656 cri.go:89] found id: ""
	I0731 21:03:17.068195  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.068206  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:17.068214  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:17.068286  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:17.105561  188656 cri.go:89] found id: ""
	I0731 21:03:17.105590  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.105601  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:17.105609  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:17.105684  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:17.144503  188656 cri.go:89] found id: ""
	I0731 21:03:17.144529  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.144540  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:17.144547  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:17.144610  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:17.183709  188656 cri.go:89] found id: ""
	I0731 21:03:17.183738  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.183747  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:17.183753  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:17.183815  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:17.222083  188656 cri.go:89] found id: ""
	I0731 21:03:17.222109  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.222117  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:17.222124  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:17.222178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:17.259503  188656 cri.go:89] found id: ""
	I0731 21:03:17.259534  188656 logs.go:276] 0 containers: []
	W0731 21:03:17.259547  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:17.259561  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:17.259578  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:17.300603  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:17.300642  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:17.352194  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:17.352235  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:17.367179  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:17.367209  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:17.440051  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:17.440074  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:17.440088  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:16.589914  188133 pod_ready.go:102] pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.082985  188133 pod_ready.go:81] duration metric: took 4m0.000734125s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:18.083015  188133 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-jrzgg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0731 21:03:18.083039  188133 pod_ready.go:38] duration metric: took 4m12.543404692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:18.083069  188133 kubeadm.go:597] duration metric: took 4m20.473129745s to restartPrimaryControlPlane
	W0731 21:03:18.083176  188133 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:18.083210  188133 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:17.610274  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:19.611592  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:18.514266  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.514967  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:20.027644  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:20.041735  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:20.041826  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:20.077436  188656 cri.go:89] found id: ""
	I0731 21:03:20.077470  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.077483  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:20.077491  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:20.077558  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:20.117420  188656 cri.go:89] found id: ""
	I0731 21:03:20.117449  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.117459  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:20.117466  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:20.117533  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:20.157794  188656 cri.go:89] found id: ""
	I0731 21:03:20.157827  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.157838  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:20.157847  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:20.157914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:20.193760  188656 cri.go:89] found id: ""
	I0731 21:03:20.193788  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.193796  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:20.193803  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:20.193856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:20.231731  188656 cri.go:89] found id: ""
	I0731 21:03:20.231764  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.231777  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:20.231785  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:20.231856  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:20.268666  188656 cri.go:89] found id: ""
	I0731 21:03:20.268697  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.268709  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:20.268717  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:20.268786  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:20.304355  188656 cri.go:89] found id: ""
	I0731 21:03:20.304392  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.304406  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:20.304414  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:20.304478  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:20.343886  188656 cri.go:89] found id: ""
	I0731 21:03:20.343915  188656 logs.go:276] 0 containers: []
	W0731 21:03:20.343927  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:20.343940  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:20.343957  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:20.358460  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:20.358494  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:20.435473  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:20.435499  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:20.435522  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:20.517961  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:20.518002  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:20.561528  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:20.561567  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.119570  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:23.134276  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:23.134366  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:23.172808  188656 cri.go:89] found id: ""
	I0731 21:03:23.172837  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.172846  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:23.172852  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:23.172914  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:23.208038  188656 cri.go:89] found id: ""
	I0731 21:03:23.208067  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.208080  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:23.208086  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:23.208140  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:23.244493  188656 cri.go:89] found id: ""
	I0731 21:03:23.244523  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.244533  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:23.244539  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:23.244605  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:23.280474  188656 cri.go:89] found id: ""
	I0731 21:03:23.280503  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.280510  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:23.280517  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:23.280581  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:23.317381  188656 cri.go:89] found id: ""
	I0731 21:03:23.317415  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.317428  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:23.317441  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:23.317511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:23.357023  188656 cri.go:89] found id: ""
	I0731 21:03:23.357051  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.357062  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:23.357071  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:23.357134  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:23.400176  188656 cri.go:89] found id: ""
	I0731 21:03:23.400211  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.400223  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:23.400230  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:23.400298  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:23.440157  188656 cri.go:89] found id: ""
	I0731 21:03:23.440190  188656 logs.go:276] 0 containers: []
	W0731 21:03:23.440201  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:23.440213  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:23.440234  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:23.494762  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:23.494802  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:23.511463  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:23.511510  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:23.600359  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:23.600383  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:23.600403  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:23.682683  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:23.682723  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:22.111495  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:24.112248  188266 pod_ready.go:102] pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:23.013460  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:25.014605  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:27.014900  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:26.225923  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:26.245708  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.245791  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.282882  188656 cri.go:89] found id: ""
	I0731 21:03:26.282910  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.282920  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:26.282928  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.282987  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.324227  188656 cri.go:89] found id: ""
	I0731 21:03:26.324268  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.324279  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:26.324287  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.324349  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.365996  188656 cri.go:89] found id: ""
	I0731 21:03:26.366027  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.366038  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:26.366047  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.366119  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.403790  188656 cri.go:89] found id: ""
	I0731 21:03:26.403823  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.403835  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:26.403844  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.403915  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.442924  188656 cri.go:89] found id: ""
	I0731 21:03:26.442947  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.442957  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:26.442964  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.443026  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.482260  188656 cri.go:89] found id: ""
	I0731 21:03:26.482286  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.482294  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:26.482300  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.482364  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.526385  188656 cri.go:89] found id: ""
	I0731 21:03:26.526420  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.526432  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.526442  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:26.526511  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:26.565217  188656 cri.go:89] found id: ""
	I0731 21:03:26.565250  188656 logs.go:276] 0 containers: []
	W0731 21:03:26.565262  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:26.565275  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:26.565294  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:26.623437  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:26.623478  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:26.639642  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:26.639683  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:26.720274  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:26.720309  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.720325  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:26.799689  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:26.799728  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:26.111147  188266 pod_ready.go:81] duration metric: took 4m0.007359775s for pod "metrics-server-569cc877fc-jf52w" in "kube-system" namespace to be "Ready" ...
	E0731 21:03:26.111173  188266 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:03:26.111180  188266 pod_ready.go:38] duration metric: took 4m2.82978193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:26.111195  188266 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:03:26.111220  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:26.111267  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:26.179210  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:26.179240  188266 cri.go:89] found id: ""
	I0731 21:03:26.179251  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:26.179315  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.184349  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:26.184430  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:26.221238  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:26.221267  188266 cri.go:89] found id: ""
	I0731 21:03:26.221277  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:26.221349  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.225908  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:26.225985  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:26.276864  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:26.276895  188266 cri.go:89] found id: ""
	I0731 21:03:26.276907  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:26.276974  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.281921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:26.282003  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:26.320868  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:26.320903  188266 cri.go:89] found id: ""
	I0731 21:03:26.320914  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:26.320984  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.326203  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:26.326272  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:26.378409  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:26.378433  188266 cri.go:89] found id: ""
	I0731 21:03:26.378442  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:26.378504  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.384006  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:26.384111  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:26.431113  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:26.431147  188266 cri.go:89] found id: ""
	I0731 21:03:26.431158  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:26.431226  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.437136  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:26.437213  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:26.484223  188266 cri.go:89] found id: ""
	I0731 21:03:26.484247  188266 logs.go:276] 0 containers: []
	W0731 21:03:26.484257  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:26.484263  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:26.484319  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:26.530433  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:26.530470  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.530476  188266 cri.go:89] found id: ""
	I0731 21:03:26.530486  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:26.530551  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.535747  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:26.541379  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:26.541406  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:26.586730  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:26.586769  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:27.133617  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:27.133672  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:27.183805  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:27.183846  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:27.226579  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:27.226620  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:27.290635  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:27.290671  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:27.330700  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:27.330732  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:27.370882  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:27.370918  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:27.426426  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:27.426471  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:27.466359  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:27.466396  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:27.515202  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:27.515235  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:27.569081  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:27.569122  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:27.586776  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:27.586809  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:30.223314  188266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:30.241046  188266 api_server.go:72] duration metric: took 4m14.179869513s to wait for apiserver process to appear ...
	I0731 21:03:30.241073  188266 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:03:30.241118  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:30.241188  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:30.281267  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:30.281303  188266 cri.go:89] found id: ""
	I0731 21:03:30.281314  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:30.281397  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.285857  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:30.285927  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:30.321742  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:30.321770  188266 cri.go:89] found id: ""
	I0731 21:03:30.321779  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:30.321841  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.326210  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:30.326284  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:30.367998  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:30.368025  188266 cri.go:89] found id: ""
	I0731 21:03:30.368036  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:30.368101  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.372340  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:30.372412  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:30.413689  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:30.413714  188266 cri.go:89] found id: ""
	I0731 21:03:30.413725  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:30.413789  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.418525  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:30.418604  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:30.458505  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.458530  188266 cri.go:89] found id: ""
	I0731 21:03:30.458539  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:30.458587  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.462993  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:30.463058  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:30.500683  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.500711  188266 cri.go:89] found id: ""
	I0731 21:03:30.500722  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:30.500785  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.506197  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:30.506277  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:30.545243  188266 cri.go:89] found id: ""
	I0731 21:03:30.545273  188266 logs.go:276] 0 containers: []
	W0731 21:03:30.545284  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:30.545290  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:30.545371  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:30.588405  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:30.588459  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.588465  188266 cri.go:89] found id: ""
	I0731 21:03:30.588474  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:30.588539  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.593611  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:30.599345  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:30.599386  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:30.641530  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:30.641564  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:30.703655  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:30.703692  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:30.744119  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:30.744147  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.515238  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:32.014503  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:29.351214  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:29.365487  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:29.365561  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:29.402989  188656 cri.go:89] found id: ""
	I0731 21:03:29.403015  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.403022  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:29.403028  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:29.403079  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:29.443276  188656 cri.go:89] found id: ""
	I0731 21:03:29.443310  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.443321  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:29.443329  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:29.443397  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:29.483285  188656 cri.go:89] found id: ""
	I0731 21:03:29.483311  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.483319  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:29.483326  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:29.483384  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:29.522285  188656 cri.go:89] found id: ""
	I0731 21:03:29.522317  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.522329  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:29.522337  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:29.522406  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:29.565115  188656 cri.go:89] found id: ""
	I0731 21:03:29.565145  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.565155  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:29.565163  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:29.565233  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:29.603768  188656 cri.go:89] found id: ""
	I0731 21:03:29.603805  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.603816  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:29.603822  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:29.603875  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:29.640380  188656 cri.go:89] found id: ""
	I0731 21:03:29.640406  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.640416  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:29.640424  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:29.640493  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:29.679699  188656 cri.go:89] found id: ""
	I0731 21:03:29.679727  188656 logs.go:276] 0 containers: []
	W0731 21:03:29.679736  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:29.679749  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:29.679764  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:29.735555  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:29.735603  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:29.749670  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:29.749708  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:29.825950  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:03:29.825973  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:29.825989  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:29.915420  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:29.915463  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:32.462996  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:32.478659  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:32.478739  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:32.528625  188656 cri.go:89] found id: ""
	I0731 21:03:32.528651  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.528659  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:03:32.528665  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:32.528724  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:32.574371  188656 cri.go:89] found id: ""
	I0731 21:03:32.574399  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.574408  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:03:32.574414  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:32.574474  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:32.616916  188656 cri.go:89] found id: ""
	I0731 21:03:32.616960  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.616970  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:03:32.616975  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:32.617040  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:32.657725  188656 cri.go:89] found id: ""
	I0731 21:03:32.657758  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.657769  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:03:32.657777  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:32.657842  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:32.693197  188656 cri.go:89] found id: ""
	I0731 21:03:32.693226  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.693237  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:03:32.693245  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:32.693316  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:32.733567  188656 cri.go:89] found id: ""
	I0731 21:03:32.733594  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.733602  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:03:32.733608  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:32.733670  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:32.774624  188656 cri.go:89] found id: ""
	I0731 21:03:32.774659  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.774671  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:32.774679  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:03:32.774747  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:03:32.811755  188656 cri.go:89] found id: ""
	I0731 21:03:32.811790  188656 logs.go:276] 0 containers: []
	W0731 21:03:32.811809  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:03:32.811822  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:32.811835  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:32.825512  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:32.825544  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:03:32.902310  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
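	The connection-refused errors in this stderr block line up with the container listings just above, where no kube-apiserver (or any other control-plane) container was found, so kubectl against localhost:8443 cannot succeed yet. A minimal manual check from the node, reusing the same invocation the log itself runs, would be something like:

	    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output here corresponds to the "0 containers" lines above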
	I0731 21:03:32.902339  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:32.902366  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:32.983347  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:03:32.983391  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:33.028037  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:33.028068  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:31.165988  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:31.166042  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:31.209564  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:31.209605  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:31.254061  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:31.254105  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:31.269227  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:31.269266  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:31.394442  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:31.394477  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:31.439011  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:31.439047  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:31.476798  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:31.476825  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:31.524460  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:31.524491  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:31.564254  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:31.564288  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:34.122836  188266 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0731 21:03:34.128516  188266 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0731 21:03:34.129484  188266 api_server.go:141] control plane version: v1.30.3
	I0731 21:03:34.129513  188266 api_server.go:131] duration metric: took 3.888432526s to wait for apiserver health ...
	I0731 21:03:34.129523  188266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:03:34.129554  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:03:34.129622  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:03:34.167751  188266 cri.go:89] found id: "89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:34.167781  188266 cri.go:89] found id: ""
	I0731 21:03:34.167792  188266 logs.go:276] 1 containers: [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718]
	I0731 21:03:34.167860  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.172786  188266 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:03:34.172858  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:03:34.212172  188266 cri.go:89] found id: "d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.212204  188266 cri.go:89] found id: ""
	I0731 21:03:34.212215  188266 logs.go:276] 1 containers: [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c]
	I0731 21:03:34.212289  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.216651  188266 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:03:34.216736  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:03:34.263492  188266 cri.go:89] found id: "987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:34.263515  188266 cri.go:89] found id: ""
	I0731 21:03:34.263528  188266 logs.go:276] 1 containers: [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025]
	I0731 21:03:34.263592  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.268548  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:03:34.268630  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:03:34.309420  188266 cri.go:89] found id: "936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:34.309453  188266 cri.go:89] found id: ""
	I0731 21:03:34.309463  188266 logs.go:276] 1 containers: [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447]
	I0731 21:03:34.309529  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.313921  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:03:34.313993  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:03:34.354712  188266 cri.go:89] found id: "c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.354740  188266 cri.go:89] found id: ""
	I0731 21:03:34.354754  188266 logs.go:276] 1 containers: [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e]
	I0731 21:03:34.354818  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.359363  188266 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:03:34.359446  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:03:34.397596  188266 cri.go:89] found id: "c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.397622  188266 cri.go:89] found id: ""
	I0731 21:03:34.397634  188266 logs.go:276] 1 containers: [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085]
	I0731 21:03:34.397710  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.402126  188266 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:03:34.402207  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:03:34.447198  188266 cri.go:89] found id: ""
	I0731 21:03:34.447234  188266 logs.go:276] 0 containers: []
	W0731 21:03:34.447242  188266 logs.go:278] No container was found matching "kindnet"
	I0731 21:03:34.447248  188266 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:03:34.447304  188266 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:03:34.487429  188266 cri.go:89] found id: "701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:34.487452  188266 cri.go:89] found id: "23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.487457  188266 cri.go:89] found id: ""
	I0731 21:03:34.487464  188266 logs.go:276] 2 containers: [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f]
	I0731 21:03:34.487519  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.494362  188266 ssh_runner.go:195] Run: which crictl
	I0731 21:03:34.499409  188266 logs.go:123] Gathering logs for etcd [d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c] ...
	I0731 21:03:34.499438  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d53e71d03f5231823eef5aa2c168c53405220f32849cf4e4292e10ca68485f1c"
	I0731 21:03:34.549761  188266 logs.go:123] Gathering logs for kube-proxy [c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e] ...
	I0731 21:03:34.549802  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c749bf9fffde862085ddc1f36dab3f403bf7f01dcb212b39ba120c01b4bb1d5e"
	I0731 21:03:34.588571  188266 logs.go:123] Gathering logs for kube-controller-manager [c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085] ...
	I0731 21:03:34.588603  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c578f56929d8452d6b076ec0cda3748fd3a99b21ac94ece681665269ff2d8085"
	I0731 21:03:34.646590  188266 logs.go:123] Gathering logs for storage-provisioner [23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f] ...
	I0731 21:03:34.646635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23b4eaaeaafcc66d14675a3a6e4b312646bf01fa48c14ca808696c5ead3a0a2f"
	I0731 21:03:34.691320  188266 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:03:34.691353  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:03:35.098975  188266 logs.go:123] Gathering logs for kubelet ...
	I0731 21:03:35.099018  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:03:35.153924  188266 logs.go:123] Gathering logs for dmesg ...
	I0731 21:03:35.153964  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:03:35.168091  188266 logs.go:123] Gathering logs for kube-apiserver [89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718] ...
	I0731 21:03:35.168121  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89c6731c9919d2a03e8795aa7ad5e3f740b05a2e18a3d797ecc52c981188e718"
	I0731 21:03:35.214469  188266 logs.go:123] Gathering logs for container status ...
	I0731 21:03:35.214511  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:03:35.260694  188266 logs.go:123] Gathering logs for storage-provisioner [701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173] ...
	I0731 21:03:35.260724  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701883982e5a7e68bf019d9b53061873882cc2bd0ede74c384d87d9456e28173"
	I0731 21:03:35.299230  188266 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:03:35.299261  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:03:35.413598  188266 logs.go:123] Gathering logs for coredns [987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025] ...
	I0731 21:03:35.413635  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 987b733bb2bf1a7660fb157ae04b5c0ba8d9bbd4e1e147b6e4ce40f3323c1025"
	I0731 21:03:35.451331  188266 logs.go:123] Gathering logs for kube-scheduler [936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447] ...
	I0731 21:03:35.451359  188266 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fe16f8f4b10b3bc2662d728c21508942048e809df9d1948de5f67f4e46447"
	I0731 21:03:35.582896  188656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:03:35.597483  188656 kubeadm.go:597] duration metric: took 4m3.860422558s to restartPrimaryControlPlane
	W0731 21:03:35.597559  188656 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 21:03:35.597598  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:03:36.054326  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:36.070199  188656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:36.081882  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:36.093300  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:36.093322  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:36.093396  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:36.103781  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:36.103843  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:36.114702  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:36.125213  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:36.125299  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:36.136299  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.146441  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:36.146520  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:36.157524  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:36.168247  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:36.168327  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:03:36.178875  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:36.253662  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:03:36.253804  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:36.401385  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:36.401550  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:36.401686  188656 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:36.591601  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:34.513632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.515043  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:36.593492  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:36.593604  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:36.593690  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:36.593817  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:36.593907  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:36.594011  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:36.594090  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:36.594215  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:36.594602  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:36.595122  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:36.595323  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:36.595414  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:36.595548  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:37.052958  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:37.178980  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:37.375085  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:37.550735  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:37.571991  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:37.575050  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:37.575227  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:37.707194  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:37.997696  188266 system_pods.go:59] 8 kube-system pods found
	I0731 21:03:37.997725  188266 system_pods.go:61] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:37.997730  188266 system_pods.go:61] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:37.997734  188266 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:37.997738  188266 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:37.997741  188266 system_pods.go:61] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:37.997744  188266 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:37.997750  188266 system_pods.go:61] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:37.997754  188266 system_pods.go:61] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:37.997762  188266 system_pods.go:74] duration metric: took 3.868231958s to wait for pod list to return data ...
	I0731 21:03:37.997773  188266 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:03:38.000640  188266 default_sa.go:45] found service account: "default"
	I0731 21:03:38.000665  188266 default_sa.go:55] duration metric: took 2.88647ms for default service account to be created ...
	I0731 21:03:38.000672  188266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:03:38.007107  188266 system_pods.go:86] 8 kube-system pods found
	I0731 21:03:38.007132  188266 system_pods.go:89] "coredns-7db6d8ff4d-gnrgs" [203ddf96-11cf-4fd3-8920-aa787815ad1a] Running
	I0731 21:03:38.007137  188266 system_pods.go:89] "etcd-default-k8s-diff-port-125614" [7b9a74f5-b8df-457d-b4be-26e3f268e74a] Running
	I0731 21:03:38.007142  188266 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-125614" [a2db9a6a-1d7f-4bb6-9f54-2d74676bba7f] Running
	I0731 21:03:38.007146  188266 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-125614" [3cd52038-e450-46ea-91a7-494bdfeb386e] Running
	I0731 21:03:38.007152  188266 system_pods.go:89] "kube-proxy-csdc4" [24077c7d-f54c-4a54-9791-742327f2a9d0] Running
	I0731 21:03:38.007158  188266 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-125614" [4b9b4f87-74a3-4768-b3ca-3226a4db7105] Running
	I0731 21:03:38.007164  188266 system_pods.go:89] "metrics-server-569cc877fc-jf52w" [00b07830-8180-43c0-83c7-e68d399ae0ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:03:38.007168  188266 system_pods.go:89] "storage-provisioner" [efc60c19-af1b-426e-82e2-5fb9a2d1fb3a] Running
	I0731 21:03:38.007175  188266 system_pods.go:126] duration metric: took 6.498733ms to wait for k8s-apps to be running ...
	I0731 21:03:38.007183  188266 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:03:38.007240  188266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:38.026906  188266 system_svc.go:56] duration metric: took 19.708653ms WaitForService to wait for kubelet
	I0731 21:03:38.026938  188266 kubeadm.go:582] duration metric: took 4m21.965767608s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:03:38.026969  188266 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:03:38.030479  188266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:03:38.030554  188266 node_conditions.go:123] node cpu capacity is 2
	I0731 21:03:38.030577  188266 node_conditions.go:105] duration metric: took 3.601933ms to run NodePressure ...
	I0731 21:03:38.030600  188266 start.go:241] waiting for startup goroutines ...
	I0731 21:03:38.030611  188266 start.go:246] waiting for cluster config update ...
	I0731 21:03:38.030626  188266 start.go:255] writing updated cluster config ...
	I0731 21:03:38.031028  188266 ssh_runner.go:195] Run: rm -f paused
	I0731 21:03:38.082629  188266 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:03:38.084590  188266 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-125614" cluster and "default" namespace by default
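	At this point the log reports the "default-k8s-diff-port-125614" profile as fully started. A minimal sanity check against it, using the context name taken from this line (illustrative, not part of the recorded run), would be:

	    kubectl --context default-k8s-diff-port-125614 get nodes
	    kubectl --context default-k8s-diff-port-125614 get pods -n kube-system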
	I0731 21:03:37.709295  188656 out.go:204]   - Booting up control plane ...
	I0731 21:03:37.709427  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:37.722549  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:37.723455  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:37.724194  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:37.726323  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:03:39.013773  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:41.016158  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:44.360883  188133 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.27764632s)
	I0731 21:03:44.360955  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:03:44.379069  188133 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:03:44.389518  188133 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:03:44.400223  188133 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:03:44.400250  188133 kubeadm.go:157] found existing configuration files:
	
	I0731 21:03:44.400302  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:03:44.410644  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:03:44.410718  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:03:44.421136  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:03:44.431161  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:03:44.431231  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:03:44.441936  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.451761  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:03:44.451820  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:03:44.462692  188133 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:03:44.472982  188133 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:03:44.473050  188133 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:03:44.482980  188133 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:03:44.532539  188133 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0731 21:03:44.532637  188133 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:03:44.651505  188133 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:03:44.651654  188133 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:03:44.651772  188133 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 21:03:44.660564  188133 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:03:44.662559  188133 out.go:204]   - Generating certificates and keys ...
	I0731 21:03:44.662676  188133 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:03:44.662765  188133 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:03:44.662878  188133 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:03:44.662971  188133 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:03:44.663073  188133 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:03:44.663142  188133 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:03:44.663218  188133 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:03:44.663293  188133 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:03:44.663389  188133 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:03:44.663527  188133 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:03:44.663587  188133 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:03:44.663679  188133 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:03:44.813556  188133 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:03:44.908380  188133 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:03:45.005215  188133 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:03:45.138446  188133 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:03:45.222892  188133 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:03:45.223622  188133 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:03:45.226748  188133 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:03:43.513039  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.513901  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:45.228799  188133 out.go:204]   - Booting up control plane ...
	I0731 21:03:45.228934  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:03:45.229087  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:03:45.230021  188133 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:03:45.249145  188133 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:03:45.258184  188133 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:03:45.258267  188133 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:03:45.392726  188133 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:03:45.392852  188133 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:03:45.899754  188133 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.694095ms
	I0731 21:03:45.899857  188133 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:03:51.901713  188133 kubeadm.go:310] [api-check] The API server is healthy after 6.00194457s
	I0731 21:03:51.914947  188133 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:03:51.932510  188133 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:03:51.971055  188133 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:03:51.971273  188133 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-916885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:03:51.985104  188133 kubeadm.go:310] [bootstrap-token] Using token: q86dx8.9ipyjyidvcwogxce
	I0731 21:03:47.515248  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:50.016206  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:51.986447  188133 out.go:204]   - Configuring RBAC rules ...
	I0731 21:03:51.986576  188133 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:03:51.993910  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:03:52.002474  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:03:52.007035  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:03:52.011708  188133 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:03:52.020500  188133 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:03:52.310057  188133 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:03:52.778266  188133 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:03:53.308425  188133 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:03:53.309509  188133 kubeadm.go:310] 
	I0731 21:03:53.309585  188133 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:03:53.309597  188133 kubeadm.go:310] 
	I0731 21:03:53.309686  188133 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:03:53.309694  188133 kubeadm.go:310] 
	I0731 21:03:53.309715  188133 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:03:53.309771  188133 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:03:53.309875  188133 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:03:53.309894  188133 kubeadm.go:310] 
	I0731 21:03:53.310007  188133 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:03:53.310027  188133 kubeadm.go:310] 
	I0731 21:03:53.310088  188133 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:03:53.310099  188133 kubeadm.go:310] 
	I0731 21:03:53.310164  188133 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:03:53.310275  188133 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:03:53.310371  188133 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:03:53.310396  188133 kubeadm.go:310] 
	I0731 21:03:53.310499  188133 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:03:53.310601  188133 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:03:53.310611  188133 kubeadm.go:310] 
	I0731 21:03:53.310735  188133 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.310910  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 \
	I0731 21:03:53.310961  188133 kubeadm.go:310] 	--control-plane 
	I0731 21:03:53.310970  188133 kubeadm.go:310] 
	I0731 21:03:53.311078  188133 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:03:53.311092  188133 kubeadm.go:310] 
	I0731 21:03:53.311222  188133 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q86dx8.9ipyjyidvcwogxce \
	I0731 21:03:53.311402  188133 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:227a67c5218b331dd5f7132ea28d97c9a8049ca33dc9adbb2077964e102f3559 
	I0731 21:03:53.312409  188133 kubeadm.go:310] W0731 21:03:44.497219    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312703  188133 kubeadm.go:310] W0731 21:03:44.498106    2941 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 21:03:53.312811  188133 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
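	The two deprecation warnings above come from kubeadm itself: the generated config still uses the kubeadm.k8s.io/v1beta3 API. The migration step kubeadm suggests in that message (file names are kubeadm's own placeholders) is:

	    kubeadm config migrate --old-config old.yaml --new-config new.yaml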
	I0731 21:03:53.312857  188133 cni.go:84] Creating CNI manager for ""
	I0731 21:03:53.312870  188133 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:03:53.315035  188133 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:03:53.316406  188133 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:03:53.327870  188133 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 21:03:53.352757  188133 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:03:53.352902  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:53.352919  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-916885 minikube.k8s.io/updated_at=2024_07_31T21_03_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=no-preload-916885 minikube.k8s.io/primary=true
	I0731 21:03:53.403275  188133 ops.go:34] apiserver oom_adj: -16
	I0731 21:03:53.579520  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.080457  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:54.579898  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.080464  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:55.580211  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.080518  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:56.579806  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.080302  188133 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:03:57.181987  188133 kubeadm.go:1113] duration metric: took 3.829153755s to wait for elevateKubeSystemPrivileges
	I0731 21:03:57.182024  188133 kubeadm.go:394] duration metric: took 4m59.623631766s to StartCluster
	I0731 21:03:57.182051  188133 settings.go:142] acquiring lock: {Name:mk1f43113b3e416d694056e4890ac819b020378c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.182160  188133 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 21:03:57.185297  188133 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/kubeconfig: {Name:mk5a842e5726f88c0dbd3bb38945f3fc4fe3008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:03:57.185586  188133 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.239 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:03:57.185672  188133 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 21:03:57.185753  188133 addons.go:69] Setting storage-provisioner=true in profile "no-preload-916885"
	I0731 21:03:57.185776  188133 addons.go:69] Setting default-storageclass=true in profile "no-preload-916885"
	I0731 21:03:57.185797  188133 addons.go:69] Setting metrics-server=true in profile "no-preload-916885"
	I0731 21:03:57.185825  188133 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-916885"
	I0731 21:03:57.185844  188133 addons.go:234] Setting addon metrics-server=true in "no-preload-916885"
	W0731 21:03:57.185856  188133 addons.go:243] addon metrics-server should already be in state true
	I0731 21:03:57.185864  188133 config.go:182] Loaded profile config "no-preload-916885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 21:03:57.185889  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.185785  188133 addons.go:234] Setting addon storage-provisioner=true in "no-preload-916885"
	W0731 21:03:57.185926  188133 addons.go:243] addon storage-provisioner should already be in state true
	I0731 21:03:57.185956  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.186201  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186226  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186247  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186279  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.186301  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.186345  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.187280  188133 out.go:177] * Verifying Kubernetes components...
	I0731 21:03:57.188864  188133 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:03:57.202393  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0731 21:03:57.202431  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0731 21:03:57.202856  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.202946  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.203416  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203434  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203688  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.203707  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.203829  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204081  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.204270  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.204428  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.204462  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.204960  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0731 21:03:57.205722  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.206275  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.206291  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.208245  188133 addons.go:234] Setting addon default-storageclass=true in "no-preload-916885"
	W0731 21:03:57.208264  188133 addons.go:243] addon default-storageclass should already be in state true
	I0731 21:03:57.208296  188133 host.go:66] Checking if "no-preload-916885" exists ...
	I0731 21:03:57.208640  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.208663  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.208866  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.209432  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.209458  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.222235  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
	I0731 21:03:57.222835  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.223408  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.223429  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.224137  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.224366  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.226564  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.227398  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0731 21:03:57.227842  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.228377  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.228399  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.228427  188133 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:03:57.228836  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.229521  188133 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19355-121704/.minikube/bin/docker-machine-driver-kvm2
	I0731 21:03:57.229573  188133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:03:57.230036  188133 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.230056  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:03:57.230075  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.230207  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0731 21:03:57.230601  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.230993  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.231008  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.231323  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.231519  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.233542  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.235239  188133 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0731 21:03:52.514632  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:55.014017  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:57.235631  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236081  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.236105  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.236374  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.236478  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:03:57.236493  188133 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:03:57.236510  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.236545  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.236711  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.236824  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.238988  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239335  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.239361  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.239482  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.239645  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.239775  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.239902  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.252386  188133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0731 21:03:57.252846  188133 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:03:57.253454  188133 main.go:141] libmachine: Using API Version  1
	I0731 21:03:57.253474  188133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:03:57.253837  188133 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:03:57.254048  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetState
	I0731 21:03:57.255784  188133 main.go:141] libmachine: (no-preload-916885) Calling .DriverName
	I0731 21:03:57.256020  188133 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.256037  188133 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:03:57.256057  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHHostname
	I0731 21:03:57.258870  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259220  188133 main.go:141] libmachine: (no-preload-916885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b1:6a", ip: ""} in network mk-no-preload-916885: {Iface:virbr4 ExpiryTime:2024-07-31 21:58:30 +0000 UTC Type:0 Mac:52:54:00:46:b1:6a Iaid: IPaddr:192.168.72.239 Prefix:24 Hostname:no-preload-916885 Clientid:01:52:54:00:46:b1:6a}
	I0731 21:03:57.259254  188133 main.go:141] libmachine: (no-preload-916885) DBG | domain no-preload-916885 has defined IP address 192.168.72.239 and MAC address 52:54:00:46:b1:6a in network mk-no-preload-916885
	I0731 21:03:57.259446  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHPort
	I0731 21:03:57.259612  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHKeyPath
	I0731 21:03:57.259783  188133 main.go:141] libmachine: (no-preload-916885) Calling .GetSSHUsername
	I0731 21:03:57.259940  188133 sshutil.go:53] new ssh client: &{IP:192.168.72.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/no-preload-916885/id_rsa Username:docker}
	I0731 21:03:57.405243  188133 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:03:57.426852  188133 node_ready.go:35] waiting up to 6m0s for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494325  188133 node_ready.go:49] node "no-preload-916885" has status "Ready":"True"
	I0731 21:03:57.494352  188133 node_ready.go:38] duration metric: took 67.471516ms for node "no-preload-916885" to be "Ready" ...
	I0731 21:03:57.494365  188133 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:03:57.497819  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:03:57.497849  188133 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0731 21:03:57.528118  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:03:57.528148  188133 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:03:57.557889  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:03:57.568872  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:03:57.583099  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:03:57.587315  188133 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:57.587342  188133 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:03:57.645504  188133 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:03:58.515635  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515650  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.515667  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.515675  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516054  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516100  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516117  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516128  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.516161  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516187  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516141  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.516213  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.516097  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.516431  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.516444  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.517889  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.517914  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.517930  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.569097  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.569120  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.569463  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.569511  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.569520  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726076  188133 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.080526254s)
	I0731 21:03:58.726140  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726153  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.726469  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.726490  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.726501  188133 main.go:141] libmachine: Making call to close driver server
	I0731 21:03:58.726514  188133 main.go:141] libmachine: (no-preload-916885) Calling .Close
	I0731 21:03:58.728603  188133 main.go:141] libmachine: (no-preload-916885) DBG | Closing plugin on server side
	I0731 21:03:58.728666  188133 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:03:58.728688  188133 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:03:58.728715  188133 addons.go:475] Verifying addon metrics-server=true in "no-preload-916885"
	I0731 21:03:58.730520  188133 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0731 21:03:58.731823  188133 addons.go:510] duration metric: took 1.546157188s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0731 21:03:57.515366  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.515730  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:02.013803  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:03:59.593082  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:00.589165  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:00.589192  188133 pod_ready.go:81] duration metric: took 3.00606369s for pod "coredns-5cfdc65f69-9qnjq" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:00.589204  188133 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:02.597316  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.096168  188133 pod_ready.go:102] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:05.597832  188133 pod_ready.go:92] pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.597857  188133 pod_ready.go:81] duration metric: took 5.008646335s for pod "coredns-5cfdc65f69-bqgfg" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.597866  188133 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603105  188133 pod_ready.go:92] pod "etcd-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.603128  188133 pod_ready.go:81] duration metric: took 5.254251ms for pod "etcd-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.603140  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610748  188133 pod_ready.go:92] pod "kube-apiserver-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.610771  188133 pod_ready.go:81] duration metric: took 7.623438ms for pod "kube-apiserver-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.610782  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615949  188133 pod_ready.go:92] pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.615966  188133 pod_ready.go:81] duration metric: took 5.176213ms for pod "kube-controller-manager-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.615975  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620431  188133 pod_ready.go:92] pod "kube-proxy-b4h2z" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.620450  188133 pod_ready.go:81] duration metric: took 4.469258ms for pod "kube-proxy-b4h2z" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.620458  188133 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993080  188133 pod_ready.go:92] pod "kube-scheduler-no-preload-916885" in "kube-system" namespace has status "Ready":"True"
	I0731 21:04:05.993104  188133 pod_ready.go:81] duration metric: took 372.640001ms for pod "kube-scheduler-no-preload-916885" in "kube-system" namespace to be "Ready" ...
	I0731 21:04:05.993112  188133 pod_ready.go:38] duration metric: took 8.498733061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:05.993125  188133 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:05.993186  188133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:06.009952  188133 api_server.go:72] duration metric: took 8.824325154s to wait for apiserver process to appear ...
	I0731 21:04:06.009981  188133 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:06.010001  188133 api_server.go:253] Checking apiserver healthz at https://192.168.72.239:8443/healthz ...
	I0731 21:04:06.014715  188133 api_server.go:279] https://192.168.72.239:8443/healthz returned 200:
	ok
	I0731 21:04:06.015917  188133 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 21:04:06.015944  188133 api_server.go:131] duration metric: took 5.952931ms to wait for apiserver health ...
	I0731 21:04:06.015954  188133 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:06.196874  188133 system_pods.go:59] 9 kube-system pods found
	I0731 21:04:06.196907  188133 system_pods.go:61] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.196914  188133 system_pods.go:61] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.196918  188133 system_pods.go:61] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.196923  188133 system_pods.go:61] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.196929  188133 system_pods.go:61] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.196933  188133 system_pods.go:61] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.196938  188133 system_pods.go:61] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.196945  188133 system_pods.go:61] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.196950  188133 system_pods.go:61] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.196960  188133 system_pods.go:74] duration metric: took 180.999269ms to wait for pod list to return data ...
	I0731 21:04:06.196970  188133 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:06.394499  188133 default_sa.go:45] found service account: "default"
	I0731 21:04:06.394530  188133 default_sa.go:55] duration metric: took 197.552628ms for default service account to be created ...
	I0731 21:04:06.394539  188133 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:06.598314  188133 system_pods.go:86] 9 kube-system pods found
	I0731 21:04:06.598345  188133 system_pods.go:89] "coredns-5cfdc65f69-9qnjq" [2350f15d-0e3d-429f-a21f-8cbd41407d7e] Running
	I0731 21:04:06.598354  188133 system_pods.go:89] "coredns-5cfdc65f69-bqgfg" [9010990b-36d5-4c0d-adc9-5d9483bd5d44] Running
	I0731 21:04:06.598361  188133 system_pods.go:89] "etcd-no-preload-916885" [951e730b-b153-4f75-9f7f-82d774e01853] Running
	I0731 21:04:06.598370  188133 system_pods.go:89] "kube-apiserver-no-preload-916885" [c53d3e94-2b2d-4ad5-a0a2-54c519a4c907] Running
	I0731 21:04:06.598376  188133 system_pods.go:89] "kube-controller-manager-no-preload-916885" [8de7eaf4-d6e7-41dc-a206-645821682ab2] Running
	I0731 21:04:06.598389  188133 system_pods.go:89] "kube-proxy-b4h2z" [328ebd98-accf-43da-ae60-40fc93f34116] Running
	I0731 21:04:06.598397  188133 system_pods.go:89] "kube-scheduler-no-preload-916885" [e6d18e4c-8e0d-4332-8fc3-2696261447ac] Running
	I0731 21:04:06.598408  188133 system_pods.go:89] "metrics-server-78fcd8795b-86m8h" [3c4df12a-3d52-48dc-9998-587565d13dca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:06.598419  188133 system_pods.go:89] "storage-provisioner" [6bfc781b-1370-4460-8018-a1279e37b39d] Running
	I0731 21:04:06.598430  188133 system_pods.go:126] duration metric: took 203.884264ms to wait for k8s-apps to be running ...
	I0731 21:04:06.598442  188133 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:06.598498  188133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:06.613642  188133 system_svc.go:56] duration metric: took 15.190132ms WaitForService to wait for kubelet
	I0731 21:04:06.613675  188133 kubeadm.go:582] duration metric: took 9.4280531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:06.613705  188133 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:06.794163  188133 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:06.794191  188133 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:06.794204  188133 node_conditions.go:105] duration metric: took 180.492992ms to run NodePressure ...
	I0731 21:04:06.794218  188133 start.go:241] waiting for startup goroutines ...
	I0731 21:04:06.794227  188133 start.go:246] waiting for cluster config update ...
	I0731 21:04:06.794239  188133 start.go:255] writing updated cluster config ...
	I0731 21:04:06.794547  188133 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:06.844118  188133 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 21:04:06.846234  188133 out.go:177] * Done! kubectl is now configured to use "no-preload-916885" cluster and "default" namespace by default
	I0731 21:04:04.015079  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:06.514907  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:08.514958  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:11.014341  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:13.514956  187862 pod_ready.go:102] pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace has status "Ready":"False"
	I0731 21:04:14.014985  187862 pod_ready.go:81] duration metric: took 4m0.007784922s for pod "metrics-server-569cc877fc-slbkm" in "kube-system" namespace to be "Ready" ...
	E0731 21:04:14.015013  187862 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0731 21:04:14.015020  187862 pod_ready.go:38] duration metric: took 4m6.056814749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:04:14.015034  187862 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:04:14.015079  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:14.015127  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:14.086254  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:14.086283  187862 cri.go:89] found id: ""
	I0731 21:04:14.086293  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:14.086368  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.091267  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:14.091334  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:14.138577  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.138613  187862 cri.go:89] found id: ""
	I0731 21:04:14.138624  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:14.138696  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.143245  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:14.143315  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:14.182295  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.182325  187862 cri.go:89] found id: ""
	I0731 21:04:14.182336  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:14.182400  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.186861  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:14.186936  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:14.230524  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:14.230547  187862 cri.go:89] found id: ""
	I0731 21:04:14.230555  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:14.230609  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.235285  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:14.235354  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:14.279188  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.279209  187862 cri.go:89] found id: ""
	I0731 21:04:14.279217  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:14.279268  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.284280  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:14.284362  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:14.333736  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:14.333764  187862 cri.go:89] found id: ""
	I0731 21:04:14.333774  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:14.333830  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.338652  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:14.338717  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:14.380632  187862 cri.go:89] found id: ""
	I0731 21:04:14.380663  187862 logs.go:276] 0 containers: []
	W0731 21:04:14.380672  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:14.380678  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:14.380747  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:14.424705  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.424727  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.424732  187862 cri.go:89] found id: ""
	I0731 21:04:14.424741  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:14.424801  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.429310  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:14.434243  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:14.434267  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:14.490743  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:14.490782  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:14.536575  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:14.536613  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:14.585952  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:14.585986  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:14.626198  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:14.626228  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:14.672674  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:14.672712  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:14.711759  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:14.711788  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:14.757020  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:14.757047  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:15.286344  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:15.286393  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:15.301933  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:15.301969  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:15.451532  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:15.451566  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:15.502398  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:15.502443  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:15.544678  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:15.544719  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:17.729291  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:04:17.730290  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:17.730512  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:18.104050  187862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:04:18.121028  187862 api_server.go:72] duration metric: took 4m17.382743031s to wait for apiserver process to appear ...
	I0731 21:04:18.121061  187862 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:04:18.121109  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:18.121179  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:18.165472  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.165498  187862 cri.go:89] found id: ""
	I0731 21:04:18.165507  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:18.165559  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.169592  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:18.169663  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:18.216918  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.216942  187862 cri.go:89] found id: ""
	I0731 21:04:18.216951  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:18.217015  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.221467  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:18.221546  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:18.267066  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.267089  187862 cri.go:89] found id: ""
	I0731 21:04:18.267098  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:18.267164  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.271583  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:18.271662  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:18.316381  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.316404  187862 cri.go:89] found id: ""
	I0731 21:04:18.316412  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:18.316472  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.320859  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:18.320932  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:18.365366  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:18.365396  187862 cri.go:89] found id: ""
	I0731 21:04:18.365410  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:18.365476  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.369933  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:18.370019  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:18.411121  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:18.411143  187862 cri.go:89] found id: ""
	I0731 21:04:18.411152  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:18.411203  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.415493  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:18.415561  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:18.453040  187862 cri.go:89] found id: ""
	I0731 21:04:18.453069  187862 logs.go:276] 0 containers: []
	W0731 21:04:18.453078  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:18.453085  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:18.453153  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:18.499335  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:18.499359  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.499363  187862 cri.go:89] found id: ""
	I0731 21:04:18.499371  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:18.499446  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.504353  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:18.508619  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:18.508640  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:18.562692  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:18.562732  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:18.623405  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:18.623446  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:18.679472  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:18.679510  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:18.728893  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:18.728933  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:18.770963  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:18.770994  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:18.819353  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:18.819385  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:18.835654  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:18.835684  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:18.947479  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:18.947516  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:18.995005  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:18.995043  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:19.033246  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:19.033274  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:19.092703  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:19.092740  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:19.129738  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:19.129769  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:22.058935  187862 api_server.go:253] Checking apiserver healthz at https://192.168.39.92:8443/healthz ...
	I0731 21:04:22.063496  187862 api_server.go:279] https://192.168.39.92:8443/healthz returned 200:
	ok
	I0731 21:04:22.064670  187862 api_server.go:141] control plane version: v1.30.3
	I0731 21:04:22.064690  187862 api_server.go:131] duration metric: took 3.943623055s to wait for apiserver health ...
	I0731 21:04:22.064699  187862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 21:04:22.064721  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:04:22.064771  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:04:22.103710  187862 cri.go:89] found id: "dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.103733  187862 cri.go:89] found id: ""
	I0731 21:04:22.103741  187862 logs.go:276] 1 containers: [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473]
	I0731 21:04:22.103798  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.108133  187862 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:04:22.108203  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:04:22.159120  187862 cri.go:89] found id: "7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.159145  187862 cri.go:89] found id: ""
	I0731 21:04:22.159155  187862 logs.go:276] 1 containers: [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e]
	I0731 21:04:22.159213  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.165107  187862 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:04:22.165169  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:04:22.202426  187862 cri.go:89] found id: "1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.202454  187862 cri.go:89] found id: ""
	I0731 21:04:22.202464  187862 logs.go:276] 1 containers: [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084]
	I0731 21:04:22.202524  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.206785  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:04:22.206842  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:04:22.245008  187862 cri.go:89] found id: "3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.245039  187862 cri.go:89] found id: ""
	I0731 21:04:22.245050  187862 logs.go:276] 1 containers: [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5]
	I0731 21:04:22.245111  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.249467  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:04:22.249548  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:04:22.731353  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:22.731627  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:22.298105  187862 cri.go:89] found id: "b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.298135  187862 cri.go:89] found id: ""
	I0731 21:04:22.298145  187862 logs.go:276] 1 containers: [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845]
	I0731 21:04:22.298209  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.302845  187862 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:04:22.302902  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:04:22.346868  187862 cri.go:89] found id: "0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.346898  187862 cri.go:89] found id: ""
	I0731 21:04:22.346909  187862 logs.go:276] 1 containers: [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f]
	I0731 21:04:22.346978  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.351246  187862 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:04:22.351313  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:04:22.389698  187862 cri.go:89] found id: ""
	I0731 21:04:22.389730  187862 logs.go:276] 0 containers: []
	W0731 21:04:22.389742  187862 logs.go:278] No container was found matching "kindnet"
	I0731 21:04:22.389751  187862 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 21:04:22.389817  187862 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 21:04:22.425212  187862 cri.go:89] found id: "919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.425234  187862 cri.go:89] found id: "c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.425238  187862 cri.go:89] found id: ""
	I0731 21:04:22.425245  187862 logs.go:276] 2 containers: [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2]
	I0731 21:04:22.425298  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.429584  187862 ssh_runner.go:195] Run: which crictl
	I0731 21:04:22.433471  187862 logs.go:123] Gathering logs for kube-controller-manager [0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f] ...
	I0731 21:04:22.433496  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0854d075486b3b97c9b42e46ab26c32217cefd956bc3fb778e6ed85a0f55fc7f"
	I0731 21:04:22.490354  187862 logs.go:123] Gathering logs for storage-provisioner [919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb] ...
	I0731 21:04:22.490390  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 919f3cf1d058c5632c65522d56f81757fef48e5f798cea774f3199c8f18048eb"
	I0731 21:04:22.530117  187862 logs.go:123] Gathering logs for dmesg ...
	I0731 21:04:22.530146  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:04:22.545249  187862 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:04:22.545281  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:04:22.658074  187862 logs.go:123] Gathering logs for kube-apiserver [dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473] ...
	I0731 21:04:22.658115  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafbb34397064c4f4b1d43a16162a6405b519a8b48e052986436b750fccd0473"
	I0731 21:04:22.711537  187862 logs.go:123] Gathering logs for etcd [7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e] ...
	I0731 21:04:22.711573  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7544698b6925d4a255cb9aef4dd8fddd9ddfd876e0d26545a9547749ca96198e"
	I0731 21:04:22.758644  187862 logs.go:123] Gathering logs for coredns [1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084] ...
	I0731 21:04:22.758685  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7f319ba94b36d33be6b1f83fca3e0cb9a0ef68fcf4a3fb87005a97beaf4084"
	I0731 21:04:22.796716  187862 logs.go:123] Gathering logs for kube-scheduler [3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5] ...
	I0731 21:04:22.796751  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ac0d9edc6a97955f7ada5180bb436e83770a23a8702f675bc34e6c78f5d77f5"
	I0731 21:04:22.843502  187862 logs.go:123] Gathering logs for storage-provisioner [c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2] ...
	I0731 21:04:22.843538  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ca8e260d6f1232b28391f427a4a699c6295c9812b0ff46138ecd79459c01c2"
	I0731 21:04:22.881738  187862 logs.go:123] Gathering logs for kubelet ...
	I0731 21:04:22.881765  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:04:22.936317  187862 logs.go:123] Gathering logs for kube-proxy [b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845] ...
	I0731 21:04:22.936360  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b51b7e8b0ab34ad7f5ad5b90cd436a22194b926f9c72382bb5dbbdda715f7845"
	I0731 21:04:22.977562  187862 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:04:22.977592  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:04:23.354873  187862 logs.go:123] Gathering logs for container status ...
	I0731 21:04:23.354921  187862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:04:25.917553  187862 system_pods.go:59] 8 kube-system pods found
	I0731 21:04:25.917588  187862 system_pods.go:61] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.917593  187862 system_pods.go:61] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.917597  187862 system_pods.go:61] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.917601  187862 system_pods.go:61] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.917604  187862 system_pods.go:61] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.917608  187862 system_pods.go:61] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.917614  187862 system_pods.go:61] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.917624  187862 system_pods.go:61] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.917635  187862 system_pods.go:74] duration metric: took 3.852929636s to wait for pod list to return data ...
	I0731 21:04:25.917649  187862 default_sa.go:34] waiting for default service account to be created ...
	I0731 21:04:25.920234  187862 default_sa.go:45] found service account: "default"
	I0731 21:04:25.920256  187862 default_sa.go:55] duration metric: took 2.600194ms for default service account to be created ...
	I0731 21:04:25.920264  187862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 21:04:25.926296  187862 system_pods.go:86] 8 kube-system pods found
	I0731 21:04:25.926325  187862 system_pods.go:89] "coredns-7db6d8ff4d-2ks55" [f5ad9d76-5cdc-430e-8933-7e72a2dda95f] Running
	I0731 21:04:25.926330  187862 system_pods.go:89] "etcd-embed-certs-831240" [5236ad06-90d9-48f1-964a-efa8f56ee8b5] Running
	I0731 21:04:25.926334  187862 system_pods.go:89] "kube-apiserver-embed-certs-831240" [06290f48-0a7b-4d88-9e61-af4c7dde9acd] Running
	I0731 21:04:25.926338  187862 system_pods.go:89] "kube-controller-manager-embed-certs-831240" [5d669fab-16cd-4bc6-a880-a1ecfb5d55b2] Running
	I0731 21:04:25.926342  187862 system_pods.go:89] "kube-proxy-x662j" [9ad0d8a8-94b4-4f3e-b5da-4e5585c28d21] Running
	I0731 21:04:25.926346  187862 system_pods.go:89] "kube-scheduler-embed-certs-831240" [cf9c5922-6468-46e7-84c1-1841f5bc3446] Running
	I0731 21:04:25.926352  187862 system_pods.go:89] "metrics-server-569cc877fc-slbkm" [f93f674b-1f0e-443b-ac06-9c2a5234eeea] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0731 21:04:25.926356  187862 system_pods.go:89] "storage-provisioner" [d3d5fa24-96e8-4ab5-9887-62ff8b82f21d] Running
	I0731 21:04:25.926365  187862 system_pods.go:126] duration metric: took 6.094538ms to wait for k8s-apps to be running ...
	I0731 21:04:25.926373  187862 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 21:04:25.926433  187862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:04:25.945225  187862 system_svc.go:56] duration metric: took 18.837835ms WaitForService to wait for kubelet
	I0731 21:04:25.945264  187862 kubeadm.go:582] duration metric: took 4m25.206984451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:04:25.945294  187862 node_conditions.go:102] verifying NodePressure condition ...
	I0731 21:04:25.948480  187862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 21:04:25.948506  187862 node_conditions.go:123] node cpu capacity is 2
	I0731 21:04:25.948520  187862 node_conditions.go:105] duration metric: took 3.219175ms to run NodePressure ...
	I0731 21:04:25.948535  187862 start.go:241] waiting for startup goroutines ...
	I0731 21:04:25.948543  187862 start.go:246] waiting for cluster config update ...
	I0731 21:04:25.948556  187862 start.go:255] writing updated cluster config ...
	I0731 21:04:25.949317  187862 ssh_runner.go:195] Run: rm -f paused
	I0731 21:04:26.000525  187862 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 21:04:26.002719  187862 out.go:177] * Done! kubectl is now configured to use "embed-certs-831240" cluster and "default" namespace by default
	I0731 21:04:32.732572  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:32.732835  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:04:52.734257  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:04:52.734530  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739465  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:05:32.739778  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:05:32.739796  188656 kubeadm.go:310] 
	I0731 21:05:32.739854  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:05:32.739962  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:05:32.739988  188656 kubeadm.go:310] 
	I0731 21:05:32.740034  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:05:32.740083  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:05:32.740230  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:05:32.740245  188656 kubeadm.go:310] 
	I0731 21:05:32.740393  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:05:32.740441  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:05:32.740485  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:05:32.740494  188656 kubeadm.go:310] 
	I0731 21:05:32.740624  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:05:32.740741  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:05:32.740752  188656 kubeadm.go:310] 
	I0731 21:05:32.740888  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:05:32.741008  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:05:32.741084  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:05:32.741145  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:05:32.741152  188656 kubeadm.go:310] 
	I0731 21:05:32.741834  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:05:32.741967  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:05:32.742066  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 21:05:32.742264  188656 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 21:05:32.742340  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 21:05:33.227380  188656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 21:05:33.243864  188656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:05:33.254208  188656 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:05:33.254234  188656 kubeadm.go:157] found existing configuration files:
	
	I0731 21:05:33.254313  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:05:33.264766  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:05:33.264846  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:05:33.275517  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:05:33.286281  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:05:33.286358  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:05:33.297108  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.307555  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:05:33.307627  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:05:33.318193  188656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:05:33.328155  188656 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:05:33.328220  188656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:05:33.338088  188656 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:05:33.569897  188656 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:07:29.725230  188656 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 21:07:29.725381  188656 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 21:07:29.726868  188656 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 21:07:29.726959  188656 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:07:29.727064  188656 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:07:29.727204  188656 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:07:29.727322  188656 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:07:29.727389  188656 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:07:29.729525  188656 out.go:204]   - Generating certificates and keys ...
	I0731 21:07:29.729659  188656 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:07:29.729761  188656 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:07:29.729918  188656 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 21:07:29.730026  188656 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 21:07:29.730126  188656 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 21:07:29.730268  188656 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 21:07:29.730369  188656 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 21:07:29.730461  188656 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 21:07:29.730555  188656 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 21:07:29.730658  188656 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 21:07:29.730713  188656 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 21:07:29.730790  188656 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:07:29.730856  188656 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:07:29.730931  188656 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:07:29.731014  188656 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:07:29.731111  188656 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:07:29.731248  188656 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:07:29.731339  188656 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:07:29.731395  188656 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:07:29.731486  188656 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:07:29.733052  188656 out.go:204]   - Booting up control plane ...
	I0731 21:07:29.733146  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:07:29.733226  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:07:29.733305  188656 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:07:29.733454  188656 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:07:29.733656  188656 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 21:07:29.733735  188656 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 21:07:29.733830  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734048  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734116  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734275  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734331  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734543  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734642  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.734868  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.734966  188656 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 21:07:29.735234  188656 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 21:07:29.735252  188656 kubeadm.go:310] 
	I0731 21:07:29.735313  188656 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 21:07:29.735376  188656 kubeadm.go:310] 		timed out waiting for the condition
	I0731 21:07:29.735385  188656 kubeadm.go:310] 
	I0731 21:07:29.735432  188656 kubeadm.go:310] 	This error is likely caused by:
	I0731 21:07:29.735480  188656 kubeadm.go:310] 		- The kubelet is not running
	I0731 21:07:29.735624  188656 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 21:07:29.735634  188656 kubeadm.go:310] 
	I0731 21:07:29.735779  188656 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 21:07:29.735830  188656 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 21:07:29.735879  188656 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 21:07:29.735889  188656 kubeadm.go:310] 
	I0731 21:07:29.736038  188656 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 21:07:29.736129  188656 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 21:07:29.736141  188656 kubeadm.go:310] 
	I0731 21:07:29.736241  188656 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 21:07:29.736315  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 21:07:29.736400  188656 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 21:07:29.736480  188656 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 21:07:29.736537  188656 kubeadm.go:310] 
	I0731 21:07:29.736579  188656 kubeadm.go:394] duration metric: took 7m58.053099483s to StartCluster
	I0731 21:07:29.736660  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:07:29.736793  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:07:29.802897  188656 cri.go:89] found id: ""
	I0731 21:07:29.802932  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.802945  188656 logs.go:278] No container was found matching "kube-apiserver"
	I0731 21:07:29.802953  188656 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:07:29.803021  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:07:29.840059  188656 cri.go:89] found id: ""
	I0731 21:07:29.840088  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.840098  188656 logs.go:278] No container was found matching "etcd"
	I0731 21:07:29.840106  188656 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:07:29.840178  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:07:29.881030  188656 cri.go:89] found id: ""
	I0731 21:07:29.881058  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.881066  188656 logs.go:278] No container was found matching "coredns"
	I0731 21:07:29.881073  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:07:29.881150  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:07:29.923495  188656 cri.go:89] found id: ""
	I0731 21:07:29.923524  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.923532  188656 logs.go:278] No container was found matching "kube-scheduler"
	I0731 21:07:29.923538  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:07:29.923604  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:07:29.966128  188656 cri.go:89] found id: ""
	I0731 21:07:29.966156  188656 logs.go:276] 0 containers: []
	W0731 21:07:29.966164  188656 logs.go:278] No container was found matching "kube-proxy"
	I0731 21:07:29.966171  188656 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:07:29.966236  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:07:30.007648  188656 cri.go:89] found id: ""
	I0731 21:07:30.007678  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.007687  188656 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 21:07:30.007693  188656 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:07:30.007748  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:07:30.047857  188656 cri.go:89] found id: ""
	I0731 21:07:30.047887  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.047903  188656 logs.go:278] No container was found matching "kindnet"
	I0731 21:07:30.047909  188656 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0731 21:07:30.047959  188656 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0731 21:07:30.087245  188656 cri.go:89] found id: ""
	I0731 21:07:30.087275  188656 logs.go:276] 0 containers: []
	W0731 21:07:30.087283  188656 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0731 21:07:30.087294  188656 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:07:30.087308  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 21:07:30.168205  188656 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 21:07:30.168235  188656 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:07:30.168256  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:07:30.276908  188656 logs.go:123] Gathering logs for container status ...
	I0731 21:07:30.276951  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:07:30.322993  188656 logs.go:123] Gathering logs for kubelet ...
	I0731 21:07:30.323030  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 21:07:30.375237  188656 logs.go:123] Gathering logs for dmesg ...
	I0731 21:07:30.375287  188656 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 21:07:30.392523  188656 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 21:07:30.392579  188656 out.go:239] * 
	W0731 21:07:30.392653  188656 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.392683  188656 out.go:239] * 
	W0731 21:07:30.393845  188656 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 21:07:30.397498  188656 out.go:177] 
	W0731 21:07:30.398890  188656 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 21:07:30.398959  188656 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 21:07:30.398995  188656 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 21:07:30.401295  188656 out.go:177] 
	
	
	==> CRI-O <==
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.847169890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460684847130611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eafc9dba-5740-4686-8e3c-44c4b6b87464 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.847824335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1afd0a9a-464b-4732-9b2d-cadca3ae997d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.847896321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1afd0a9a-464b-4732-9b2d-cadca3ae997d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.847926582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1afd0a9a-464b-4732-9b2d-cadca3ae997d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.882935396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9169581a-7f09-472b-a6b0-1a1e4f899a4a name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.883024246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9169581a-7f09-472b-a6b0-1a1e4f899a4a name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.884493425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ccf5f09-595f-4600-96b5-e4f4b3ed92c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.884890256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460684884868761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ccf5f09-595f-4600-96b5-e4f4b3ed92c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.885540952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b4be37c-ae66-40fa-9597-e2e1ce850a4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.885597439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b4be37c-ae66-40fa-9597-e2e1ce850a4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.885627861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8b4be37c-ae66-40fa-9597-e2e1ce850a4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.918526079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63ac5899-c717-4e88-acd7-4ebb6211d0c4 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.918598404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63ac5899-c717-4e88-acd7-4ebb6211d0c4 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.919981974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff2663a9-f9dd-4654-bac8-4e33f837f714 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.920447172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460684920421102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff2663a9-f9dd-4654-bac8-4e33f837f714 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.921076886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daa407d9-378e-4916-bbd3-0655e57b7f4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.921156846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daa407d9-378e-4916-bbd3-0655e57b7f4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.921197680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=daa407d9-378e-4916-bbd3-0655e57b7f4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.958448878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=415019fb-f4f6-47f8-a25c-e9807550eaf9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.958577112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=415019fb-f4f6-47f8-a25c-e9807550eaf9 name=/runtime.v1.RuntimeService/Version
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.959799635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f333a306-a9cd-448c-a010-224b2ffbff17 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.960417345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722460684960389703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f333a306-a9cd-448c-a010-224b2ffbff17 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.961118763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33a6e879-f981-4179-8e02-0d0c1b8988ee name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.961261217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33a6e879-f981-4179-8e02-0d0c1b8988ee name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 21:18:04 old-k8s-version-239115 crio[646]: time="2024-07-31 21:18:04.961318727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=33a6e879-f981-4179-8e02-0d0c1b8988ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul31 20:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062231] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050403] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.190389] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.608719] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.611027] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.653908] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.062587] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060554] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.234631] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.143128] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.268421] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.725014] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.065215] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.078703] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[ +10.116461] kauditd_printk_skb: 46 callbacks suppressed
	[Jul31 21:03] systemd-fstab-generator[5008]: Ignoring "noauto" option for root device
	[Jul31 21:05] systemd-fstab-generator[5292]: Ignoring "noauto" option for root device
	[  +0.069669] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:18:05 up 19 min,  0 users,  load average: 0.00, 0.02, 0.05
	Linux old-k8s-version-239115 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000aaa000)
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]: goroutine 158 [select]:
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009edef0, 0x4f0ac20, 0xc000a1caa0, 0x1, 0xc0001000c0)
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e9180, 0xc0001000c0)
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000aa6290, 0xc000a20fc0)
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 31 21:18:02 old-k8s-version-239115 kubelet[6703]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 31 21:18:02 old-k8s-version-239115 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 31 21:18:02 old-k8s-version-239115 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 31 21:18:03 old-k8s-version-239115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 130.
	Jul 31 21:18:03 old-k8s-version-239115 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 31 21:18:03 old-k8s-version-239115 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 31 21:18:03 old-k8s-version-239115 kubelet[6712]: I0731 21:18:03.226274    6712 server.go:416] Version: v1.20.0
	Jul 31 21:18:03 old-k8s-version-239115 kubelet[6712]: I0731 21:18:03.226690    6712 server.go:837] Client rotation is on, will bootstrap in background
	Jul 31 21:18:03 old-k8s-version-239115 kubelet[6712]: I0731 21:18:03.229000    6712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 31 21:18:03 old-k8s-version-239115 kubelet[6712]: W0731 21:18:03.230031    6712 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 31 21:18:03 old-k8s-version-239115 kubelet[6712]: I0731 21:18:03.230070    6712 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 2 (230.679313ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-239115" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (88.90s)
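
Note on this failure: the kubeadm "wait-control-plane" timeout, the kubelet log line "Cannot detect current cgroup on cgroup v2", and the repeated kubelet.service restarts captured above are consistent with the cgroup-driver mismatch that the log's own suggestion points at. Below is a minimal sketch of that suggested retry, assuming the same profile name and minikube binary seen elsewhere in this report; other start flags (driver, container runtime, --kubernetes-version) are omitted and would need to match the original invocation, and this is not a verified fix for this run.

	# Retry the old-k8s-version profile with the kubelet cgroup driver forced to systemd,
	# per the K8S_KUBELET_NOT_RUNNING suggestion logged above.
	out/minikube-linux-amd64 start -p old-k8s-version-239115 --extra-config=kubelet.cgroup-driver=systemd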

                                                
                                    

Test pass (256/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 52.93
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 13.14
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 48.59
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 73.81
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 208.48
40 TestAddons/serial/GCPAuth/Namespaces 1.52
42 TestAddons/parallel/Registry 21.63
44 TestAddons/parallel/InspektorGadget 10.75
46 TestAddons/parallel/HelmTiller 11.58
48 TestAddons/parallel/CSI 45.67
49 TestAddons/parallel/Headlamp 21.98
50 TestAddons/parallel/CloudSpanner 5.6
51 TestAddons/parallel/LocalPath 62.19
52 TestAddons/parallel/NvidiaDevicePlugin 6.57
53 TestAddons/parallel/Yakd 12.02
55 TestCertOptions 83.95
56 TestCertExpiration 303.48
58 TestForceSystemdFlag 118.05
59 TestForceSystemdEnv 91.94
61 TestKVMDriverInstallOrUpdate 7.16
65 TestErrorSpam/setup 40.6
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.7
68 TestErrorSpam/pause 1.54
69 TestErrorSpam/unpause 1.57
70 TestErrorSpam/stop 5.24
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 67.02
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.96
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.19
82 TestFunctional/serial/CacheCmd/cache/add_local 2.17
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 31.35
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.45
93 TestFunctional/serial/LogsFileCmd 1.5
94 TestFunctional/serial/InvalidService 4.9
96 TestFunctional/parallel/ConfigCmd 0.31
97 TestFunctional/parallel/DashboardCmd 18.27
98 TestFunctional/parallel/DryRun 0.33
99 TestFunctional/parallel/InternationalLanguage 0.15
100 TestFunctional/parallel/StatusCmd 1.2
104 TestFunctional/parallel/ServiceCmdConnect 7.57
105 TestFunctional/parallel/AddonsCmd 0.14
106 TestFunctional/parallel/PersistentVolumeClaim 44.71
108 TestFunctional/parallel/SSHCmd 0.38
109 TestFunctional/parallel/CpCmd 1.43
110 TestFunctional/parallel/MySQL 26.88
111 TestFunctional/parallel/FileSync 0.19
112 TestFunctional/parallel/CertSync 1.44
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
120 TestFunctional/parallel/License 0.71
121 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
123 TestFunctional/parallel/ProfileCmd/profile_list 0.29
124 TestFunctional/parallel/MountCmd/any-port 10.93
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
126 TestFunctional/parallel/ServiceCmd/List 0.47
127 TestFunctional/parallel/MountCmd/specific-port 1.56
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
130 TestFunctional/parallel/ServiceCmd/Format 0.28
131 TestFunctional/parallel/ServiceCmd/URL 0.27
132 TestFunctional/parallel/MountCmd/VerifyCleanup 0.74
142 TestFunctional/parallel/Version/short 0.05
143 TestFunctional/parallel/Version/components 0.84
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
147 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
148 TestFunctional/parallel/ImageCommands/ImageListTable 0.46
149 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
150 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
152 TestFunctional/parallel/ImageCommands/Setup 1.94
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.87
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.2
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.45
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 277.24
167 TestMultiControlPlane/serial/DeployApp 6.44
168 TestMultiControlPlane/serial/PingHostFromPods 1.27
169 TestMultiControlPlane/serial/AddWorkerNode 55.56
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
172 TestMultiControlPlane/serial/CopyFile 12.87
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.18
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 345.43
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 81.28
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
188 TestJSONOutput/start/Command 55.99
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.77
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.65
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 9.36
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 93.54
220 TestMountStart/serial/StartWithMountFirst 27.68
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 28.81
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.67
225 TestMountStart/serial/VerifyMountPostDelete 0.38
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 24.46
228 TestMountStart/serial/VerifyMountPostStop 0.38
231 TestMultiNode/serial/FreshStart2Nodes 125.91
232 TestMultiNode/serial/DeployApp2Nodes 5.58
233 TestMultiNode/serial/PingHostFrom2Pods 0.8
234 TestMultiNode/serial/AddNode 52.38
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 7.09
238 TestMultiNode/serial/StopNode 2.3
239 TestMultiNode/serial/StartAfterStop 40.57
241 TestMultiNode/serial/DeleteNode 2.15
243 TestMultiNode/serial/RestartMultiNode 181.99
244 TestMultiNode/serial/ValidateNameConflict 44.68
251 TestScheduledStopUnix 117.21
255 TestRunningBinaryUpgrade 205.52
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
264 TestNoKubernetes/serial/StartWithK8s 72.01
269 TestNetworkPlugins/group/false 2.92
273 TestNoKubernetes/serial/StartWithStopK8s 45.95
274 TestNoKubernetes/serial/Start 72.03
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
276 TestNoKubernetes/serial/ProfileList 0.79
277 TestNoKubernetes/serial/Stop 1.3
278 TestNoKubernetes/serial/StartNoArgs 65.3
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
280 TestStoppedBinaryUpgrade/Setup 2.63
281 TestStoppedBinaryUpgrade/Upgrade 114.98
290 TestPause/serial/Start 58.21
291 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
292 TestNetworkPlugins/group/auto/Start 75.51
293 TestNetworkPlugins/group/kindnet/Start 102.41
294 TestPause/serial/SecondStartNoReconfiguration 78.22
295 TestNetworkPlugins/group/auto/KubeletFlags 0.2
296 TestNetworkPlugins/group/auto/NetCatPod 9.24
297 TestNetworkPlugins/group/auto/DNS 0.19
298 TestNetworkPlugins/group/auto/Localhost 0.16
299 TestNetworkPlugins/group/auto/HairPin 0.14
300 TestPause/serial/Pause 0.79
301 TestPause/serial/VerifyStatus 0.27
302 TestPause/serial/Unpause 0.72
303 TestPause/serial/PauseAgain 0.88
304 TestPause/serial/DeletePaused 1.02
305 TestPause/serial/VerifyDeletedResources 0.55
306 TestNetworkPlugins/group/calico/Start 90.79
307 TestNetworkPlugins/group/custom-flannel/Start 101.88
308 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
309 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
310 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
311 TestNetworkPlugins/group/kindnet/DNS 0.16
312 TestNetworkPlugins/group/kindnet/Localhost 0.13
313 TestNetworkPlugins/group/kindnet/HairPin 0.13
314 TestNetworkPlugins/group/enable-default-cni/Start 105.63
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.2
317 TestNetworkPlugins/group/calico/NetCatPod 11.55
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.38
320 TestNetworkPlugins/group/calico/DNS 0.22
321 TestNetworkPlugins/group/calico/Localhost 0.15
322 TestNetworkPlugins/group/calico/HairPin 0.16
323 TestNetworkPlugins/group/custom-flannel/DNS 0.2
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
326 TestNetworkPlugins/group/flannel/Start 86.65
327 TestNetworkPlugins/group/bridge/Start 85.88
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.22
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
336 TestStartStop/group/no-preload/serial/FirstStart 165.59
337 TestNetworkPlugins/group/flannel/ControllerPod 6.01
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
339 TestNetworkPlugins/group/flannel/NetCatPod 15.31
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
341 TestNetworkPlugins/group/bridge/NetCatPod 11.34
342 TestNetworkPlugins/group/flannel/DNS 0.15
343 TestNetworkPlugins/group/flannel/Localhost 0.14
344 TestNetworkPlugins/group/flannel/HairPin 0.14
345 TestNetworkPlugins/group/bridge/DNS 0.16
346 TestNetworkPlugins/group/bridge/Localhost 0.13
347 TestNetworkPlugins/group/bridge/HairPin 0.11
349 TestStartStop/group/embed-certs/serial/FirstStart 64.22
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 98.47
352 TestStartStop/group/embed-certs/serial/DeployApp 12.72
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
355 TestStartStop/group/no-preload/serial/DeployApp 10.3
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
364 TestStartStop/group/embed-certs/serial/SecondStart 644.06
367 TestStartStop/group/no-preload/serial/SecondStart 602.9
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 562.61
369 TestStartStop/group/old-k8s-version/serial/Stop 1.36
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
381 TestStartStop/group/newest-cni/serial/FirstStart 49.52
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
384 TestStartStop/group/newest-cni/serial/Stop 7.3
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
386 TestStartStop/group/newest-cni/serial/SecondStart 36.89
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
390 TestStartStop/group/newest-cni/serial/Pause 2.44
x
+
TestDownloadOnly/v1.20.0/json-events (52.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-149010 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-149010 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (52.931626487s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (52.93s)
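Note: the json-events subtest exercises minikube's machine-readable output. A minimal, hedged sketch of driving the same download-only start from Go and decoding the -o=json event stream line by line (binary path and profile name are illustrative; the event schema is treated as an opaque JSON object here):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Illustrative binary path and profile name; flags copied from the test command above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-example", "--force",
		"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=kvm2")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Each line of -o=json output is one JSON object; decode it generically.
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise on stdout
		}
		fmt.Println(ev)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}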

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
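Note: preload-exists only verifies that the tarball fetched in the previous subtest is present in the minikube cache. A minimal sketch of that check, assuming the cache layout shown in the download log above (the MINIKUBE_HOME-relative path is copied from that log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path layout as seen in the download log above; MINIKUBE_HOME is supplied by the caller.
	home := os.Getenv("MINIKUBE_HOME")
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")

	info, err := os.Stat(tarball)
	if err != nil {
		fmt.Printf("preload missing: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("preload present: %s (%d bytes)\n", tarball, info.Size())
}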

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-149010
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-149010: exit status 85 (59.225997ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-149010 | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC |          |
	|         | -p download-only-149010        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:27:07
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:27:07.823160  128903 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:27:07.823281  128903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:27:07.823292  128903 out.go:304] Setting ErrFile to fd 2...
	I0731 19:27:07.823297  128903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:27:07.823504  128903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	W0731 19:27:07.823628  128903 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19355-121704/.minikube/config/config.json: open /home/jenkins/minikube-integration/19355-121704/.minikube/config/config.json: no such file or directory
	I0731 19:27:07.824203  128903 out.go:298] Setting JSON to true
	I0731 19:27:07.825069  128903 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4164,"bootTime":1722449864,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:27:07.825128  128903 start.go:139] virtualization: kvm guest
	I0731 19:27:07.827782  128903 out.go:97] [download-only-149010] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0731 19:27:07.827881  128903 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 19:27:07.827931  128903 notify.go:220] Checking for updates...
	I0731 19:27:07.829239  128903 out.go:169] MINIKUBE_LOCATION=19355
	I0731 19:27:07.830546  128903 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:27:07.831800  128903 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:27:07.832992  128903 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:27:07.834222  128903 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 19:27:07.836732  128903 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 19:27:07.836937  128903 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:27:07.934041  128903 out.go:97] Using the kvm2 driver based on user configuration
	I0731 19:27:07.934090  128903 start.go:297] selected driver: kvm2
	I0731 19:27:07.934100  128903 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:27:07.934444  128903 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:27:07.934590  128903 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:27:07.949836  128903 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:27:07.949892  128903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:27:07.950344  128903 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 19:27:07.950507  128903 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:27:07.950537  128903 cni.go:84] Creating CNI manager for ""
	I0731 19:27:07.950549  128903 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:27:07.950562  128903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:27:07.950624  128903 start.go:340] cluster config:
	{Name:download-only-149010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-149010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:27:07.950818  128903 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:27:07.952708  128903 out.go:97] Downloading VM boot image ...
	I0731 19:27:07.952752  128903 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:27:19.106786  128903 out.go:97] Starting "download-only-149010" primary control-plane node in "download-only-149010" cluster
	I0731 19:27:19.106816  128903 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 19:27:19.214589  128903 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 19:27:19.214648  128903 cache.go:56] Caching tarball of preloaded images
	I0731 19:27:19.214839  128903 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 19:27:19.216801  128903 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 19:27:19.216834  128903 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:27:19.330737  128903 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 19:27:32.314140  128903 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:27:32.314241  128903 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:27:33.204442  128903 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 19:27:33.204858  128903 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/download-only-149010/config.json ...
	I0731 19:27:33.204897  128903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/download-only-149010/config.json: {Name:mk33c0c2633c4287d3570ed5b8e99d4c29692b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:27:33.205066  128903 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 19:27:33.205221  128903 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-149010 host does not exist
	  To start a cluster, run: "minikube start -p download-only-149010"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
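Note: with --download-only no VM is ever created, so `minikube logs` is expected to fail here; exit status 85 is the code observed in this run for a profile whose host does not exist. A minimal sketch (profile name illustrative) of capturing and inspecting that exit code from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-example")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 85 is the exit status observed in this run when the profile's host does not exist.
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("unexpected failure:", err)
		return
	}
	fmt.Println("logs succeeded unexpectedly for a download-only profile")
}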

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-149010
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (13.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-430731 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-430731 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.141886058s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (13.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-430731
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-430731: exit status 85 (60.75566ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-149010 | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC |                     |
	|         | -p download-only-149010        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| delete  | -p download-only-149010        | download-only-149010 | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -o=json --download-only        | download-only-430731 | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC |                     |
	|         | -p download-only-430731        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:28:01
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:28:01.075151  129256 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:28:01.075411  129256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:28:01.075420  129256 out.go:304] Setting ErrFile to fd 2...
	I0731 19:28:01.075424  129256 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:28:01.075634  129256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:28:01.076185  129256 out.go:298] Setting JSON to true
	I0731 19:28:01.077047  129256 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4217,"bootTime":1722449864,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:28:01.077113  129256 start.go:139] virtualization: kvm guest
	I0731 19:28:01.079294  129256 out.go:97] [download-only-430731] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:28:01.079440  129256 notify.go:220] Checking for updates...
	I0731 19:28:01.080873  129256 out.go:169] MINIKUBE_LOCATION=19355
	I0731 19:28:01.082322  129256 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:28:01.083853  129256 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:28:01.085064  129256 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:28:01.086229  129256 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 19:28:01.088631  129256 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 19:28:01.088856  129256 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:28:01.121419  129256 out.go:97] Using the kvm2 driver based on user configuration
	I0731 19:28:01.121449  129256 start.go:297] selected driver: kvm2
	I0731 19:28:01.121455  129256 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:28:01.121790  129256 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:28:01.121888  129256 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:28:01.137286  129256 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:28:01.137375  129256 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:28:01.138052  129256 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 19:28:01.138282  129256 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:28:01.138319  129256 cni.go:84] Creating CNI manager for ""
	I0731 19:28:01.138333  129256 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:28:01.138343  129256 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:28:01.138430  129256 start.go:340] cluster config:
	{Name:download-only-430731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-430731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:28:01.138568  129256 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:28:01.140269  129256 out.go:97] Starting "download-only-430731" primary control-plane node in "download-only-430731" cluster
	I0731 19:28:01.140295  129256 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:28:01.252209  129256 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:28:01.252240  129256 cache.go:56] Caching tarball of preloaded images
	I0731 19:28:01.252394  129256 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:28:01.254418  129256 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 19:28:01.254434  129256 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:28:01.365754  129256 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-430731 host does not exist
	  To start a cluster, run: "minikube start -p download-only-430731"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-430731
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (48.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-373672 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-373672 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (48.58930457s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (48.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-373672
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-373672: exit status 85 (63.504518ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-149010 | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC |                     |
	|         | -p download-only-149010             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| delete  | -p download-only-149010             | download-only-149010 | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -o=json --download-only             | download-only-430731 | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC |                     |
	|         | -p download-only-430731             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| delete  | -p download-only-430731             | download-only-430731 | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -o=json --download-only             | download-only-373672 | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC |                     |
	|         | -p download-only-373672             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:28:14
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:28:14.523372  129466 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:28:14.523476  129466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:28:14.523483  129466 out.go:304] Setting ErrFile to fd 2...
	I0731 19:28:14.523487  129466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:28:14.523697  129466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:28:14.524216  129466 out.go:298] Setting JSON to true
	I0731 19:28:14.525032  129466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4230,"bootTime":1722449864,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:28:14.525090  129466 start.go:139] virtualization: kvm guest
	I0731 19:28:14.527362  129466 out.go:97] [download-only-373672] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:28:14.527500  129466 notify.go:220] Checking for updates...
	I0731 19:28:14.528994  129466 out.go:169] MINIKUBE_LOCATION=19355
	I0731 19:28:14.530412  129466 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:28:14.531765  129466 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:28:14.533031  129466 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:28:14.534252  129466 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 19:28:14.536911  129466 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 19:28:14.537238  129466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:28:14.568651  129466 out.go:97] Using the kvm2 driver based on user configuration
	I0731 19:28:14.568678  129466 start.go:297] selected driver: kvm2
	I0731 19:28:14.568686  129466 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:28:14.569093  129466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:28:14.569191  129466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-121704/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:28:14.583827  129466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:28:14.583892  129466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:28:14.584549  129466 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 19:28:14.584744  129466 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:28:14.584776  129466 cni.go:84] Creating CNI manager for ""
	I0731 19:28:14.584790  129466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:28:14.584804  129466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:28:14.584873  129466 start.go:340] cluster config:
	{Name:download-only-373672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-373672 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:28:14.584997  129466 iso.go:125] acquiring lock: {Name:mk69fc0fe37180b19ce91307b5e12ab2d0bd69fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:28:14.586851  129466 out.go:97] Starting "download-only-373672" primary control-plane node in "download-only-373672" cluster
	I0731 19:28:14.586880  129466 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 19:28:14.694722  129466 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 19:28:14.694761  129466 cache.go:56] Caching tarball of preloaded images
	I0731 19:28:14.694927  129466 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 19:28:14.696822  129466 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 19:28:14.696841  129466 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:28:14.806704  129466 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 19:28:25.325071  129466 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:28:25.325184  129466 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19355-121704/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:28:26.047863  129466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 19:28:26.048253  129466 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/download-only-373672/config.json ...
	I0731 19:28:26.048284  129466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/download-only-373672/config.json: {Name:mk071afd302c3c644b0e6317c16bf30f4333f326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:28:26.048483  129466 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 19:28:26.048613  129466 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19355-121704/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-373672 host does not exist
	  To start a cluster, run: "minikube start -p download-only-373672"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-373672
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-281803 --alsologtostderr --binary-mirror http://127.0.0.1:37353 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-281803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-281803
--- PASS: TestBinaryMirror (0.56s)
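Note: TestBinaryMirror points --binary-mirror at a local HTTP endpoint so the kubectl/kubelet/kubeadm downloads never leave the machine. A minimal sketch of the same idea; the ./mirror directory and its layout are assumptions (it would need to mirror the upstream release paths), and the port matches the one used above:

package main

import (
	"log"
	"net"
	"net/http"
	"os/exec"
)

func main() {
	// Stand up a throwaway local mirror; ./mirror is a hypothetical directory
	// assumed to be laid out like the upstream binary release paths.
	ln, err := net.Listen("tcp", "127.0.0.1:37353")
	if err != nil {
		log.Fatal(err)
	}
	go http.Serve(ln, http.FileServer(http.Dir("./mirror")))

	cmd := exec.Command("out/minikube-linux-amd64", "start", "--download-only",
		"-p", "binary-mirror-example", "--alsologtostderr",
		"--binary-mirror", "http://127.0.0.1:37353",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatalf("start with --binary-mirror failed: %v", err)
	}
}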

                                                
                                    
x
+
TestOffline (73.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-842070 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-842070 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m12.811091642s)
helpers_test.go:175: Cleaning up "offline-crio-842070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-842070
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-842070: (1.003430171s)
--- PASS: TestOffline (73.81s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-715925
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-715925: exit status 85 (49.671341ms)

                                                
                                                
-- stdout --
	* Profile "addons-715925" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-715925"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-715925
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-715925: exit status 85 (49.286221ms)

                                                
                                                
-- stdout --
	* Profile "addons-715925" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-715925"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (208.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-715925 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-715925 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m28.477744994s)
--- PASS: TestAddons/Setup (208.48s)
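Note: the Setup step is a single `minikube start` carrying one --addons flag per addon. A minimal sketch (profile name illustrative; addon list copied from the command above) of assembling that invocation programmatically rather than as one long literal:

package main

import (
	"log"
	"os/exec"
)

func main() {
	addons := []string{
		"registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
		"gcp-auth", "cloud-spanner", "inspektor-gadget", "storage-provisioner-rancher",
		"nvidia-device-plugin", "yakd", "volcano", "ingress", "ingress-dns", "helm-tiller",
	}

	args := []string{"start", "-p", "addons-example", "--wait=true",
		"--memory=4000", "--alsologtostderr", "--driver=kvm2", "--container-runtime=crio"}
	for _, a := range addons {
		args = append(args, "--addons="+a)
	}

	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatalf("addon setup failed: %v", err)
	}
}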

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (1.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-715925 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-715925 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-715925 get secret gcp-auth -n new-namespace: exit status 1 (84.806659ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-715925 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-715925 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.52s)

                                                
                                    
TestAddons/parallel/Registry (21.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.879969ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-x87x7" [2a48b934-362f-4a2d-b591-308e178c9f76] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006321077s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2j7k4" [2550e10a-7f6c-463d-a4b7-da2406bd5137] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004890323s
addons_test.go:342: (dbg) Run:  kubectl --context addons-715925 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-715925 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-715925 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.786748824s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 ip
2024/07/31 19:33:14 [DEBUG] GET http://192.168.39.147:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.63s)
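Note (not part of the test output): the registry check above can be repeated by hand against a running addons profile. A minimal sketch, assuming a locally installed minikube binary is used in place of out/minikube-linux-amd64 and reusing the profile name from this run:

	kubectl --context addons-715925 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	minikube -p addons-715925 ip    # registry-proxy then answers on <node-ip>:5000, as probed by the test above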

                                                
                                    
TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fcj87" [db698a8e-e32b-4ee0-93a6-82cc059e7064] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005001864s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-715925
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-715925: (5.746426468s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.58s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.875077ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-9f7w2" [451aed79-261a-45ab-aa7c-e595c0dd9688] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006338049s
addons_test.go:475: (dbg) Run:  kubectl --context addons-715925 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-715925 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.946486527s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.58s)

                                                
                                    
TestAddons/parallel/CSI (45.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.483277ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [74064d9f-a570-4e95-aeaf-48b50249652d] Pending
helpers_test.go:344: "task-pv-pod" [74064d9f-a570-4e95-aeaf-48b50249652d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [74064d9f-a570-4e95-aeaf-48b50249652d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004141206s
addons_test.go:590: (dbg) Run:  kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-715925 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-715925 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-715925 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-715925 delete pod task-pv-pod: (1.391234651s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-715925 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [37fbd67c-0cbe-4e69-afab-27d24123052a] Pending
helpers_test.go:344: "task-pv-pod-restore" [37fbd67c-0cbe-4e69-afab-27d24123052a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [37fbd67c-0cbe-4e69-afab-27d24123052a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006361856s
addons_test.go:632: (dbg) Run:  kubectl --context addons-715925 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-715925 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-715925 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.82302383s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 addons disable volumesnapshots --alsologtostderr -v=1: (1.412700022s)
--- PASS: TestAddons/parallel/CSI (45.67s)
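Note (not part of the test output): condensed, the CSI flow exercised above is create PVC, then pod, then VolumeSnapshot, then a PVC restored from the snapshot, then a pod over the restored PVC, all against the csi-hostpath driver. A sketch of the same sequence of kubectl calls as recorded in the log (the testdata manifests themselves are not reproduced here):

	kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-715925 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}   # poll until ready
	kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-715925 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml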

                                                
                                    
TestAddons/parallel/Headlamp (21.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-715925 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-715925 --alsologtostderr -v=1: (1.078760126s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-b2kms" [97207dec-243f-476c-858e-3d07887ef406] Pending
helpers_test.go:344: "headlamp-7867546754-b2kms" [97207dec-243f-476c-858e-3d07887ef406] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-b2kms" [97207dec-243f-476c-858e-3d07887ef406] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003805295s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 addons disable headlamp --alsologtostderr -v=1: (5.89528354s)
--- PASS: TestAddons/parallel/Headlamp (21.98s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-h8g54" [25a15f02-3be5-4333-868c-4b2fde068765] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00534077s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-715925
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/parallel/LocalPath (62.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-715925 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-715925 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4d78bab4-0311-4b3b-9243-76ccf624dc8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4d78bab4-0311-4b3b-9243-76ccf624dc8d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4d78bab4-0311-4b3b-9243-76ccf624dc8d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.004261364s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-715925 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 ssh "cat /opt/local-path-provisioner/pvc-7abc566a-0469-49d9-9aef-8963a9d00867_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-715925 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-715925 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.312070279s)
--- PASS: TestAddons/parallel/LocalPath (62.19s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2p88n" [8b668c12-5647-4aa6-b190-d9e2e127ea94] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005439178s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-715925
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                    
TestAddons/parallel/Yakd (12.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-vjf29" [269c2cac-c9ea-4ae0-9d2c-f6b8bee99406] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004918273s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-715925 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-715925 addons disable yakd --alsologtostderr -v=1: (6.012598447s)
--- PASS: TestAddons/parallel/Yakd (12.02s)

                                                
                                    
TestCertOptions (83.95s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-381007 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-381007 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m22.741116921s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-381007 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-381007 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-381007 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-381007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-381007
--- PASS: TestCertOptions (83.95s)
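Note (not part of the test output): TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names values end up in the apiserver certificate and that the custom --apiserver-port is used. A hedged sketch of checking the same thing manually on such a profile (the grep filter is illustrative and not part of the test):

	minikube -p cert-options-381007 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"   # expect 192.168.15.15 and www.google.com among the SANs
	kubectl --context cert-options-381007 config view   # the cluster server URL should use port 8555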

                                                
                                    
TestCertExpiration (303.48s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-812046 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-812046 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m22.72283742s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-812046 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-812046 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.911406927s)
helpers_test.go:175: Cleaning up "cert-expiration-812046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-812046
--- PASS: TestCertExpiration (303.48s)

                                                
                                    
TestForceSystemdFlag (118.05s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-702124 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-702124 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m56.844863003s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-702124 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-702124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-702124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-702124: (1.015307894s)
--- PASS: TestForceSystemdFlag (118.05s)

                                                
                                    
TestForceSystemdEnv (91.94s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-844115 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-844115 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m31.12842847s)
helpers_test.go:175: Cleaning up "force-systemd-env-844115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-844115
--- PASS: TestForceSystemdEnv (91.94s)

                                                
                                    
TestKVMDriverInstallOrUpdate (7.16s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (7.16s)

                                                
                                    
TestErrorSpam/setup (40.6s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-171449 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-171449 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-171449 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-171449 --driver=kvm2  --container-runtime=crio: (40.603175931s)
--- PASS: TestErrorSpam/setup (40.60s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

                                                
                                    
TestErrorSpam/stop (5.24s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 stop
E0731 19:42:34.577414  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:34.583286  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:34.593590  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:34.613924  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:34.654263  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:34.734620  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:34.895067  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:35.215624  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 stop: (1.587587557s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 stop
E0731 19:42:35.855826  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:37.136349  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 stop: (1.594690988s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-171449 --log_dir /tmp/nospam-171449 stop: (2.060177602s)
--- PASS: TestErrorSpam/stop (5.24s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19355-121704/.minikube/files/etc/test/nested/copy/128891/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (67.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-904202 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0731 19:42:44.817582  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:42:55.058548  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:43:15.539136  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-904202 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m7.018848819s)
--- PASS: TestFunctional/serial/StartWithProxy (67.02s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.96s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-904202 --alsologtostderr -v=8
E0731 19:43:56.499736  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-904202 --alsologtostderr -v=8: (35.955672318s)
functional_test.go:659: soft start took 35.956344236s for "functional-904202" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.96s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-904202 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 cache add registry.k8s.io/pause:3.1: (1.000104726s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 cache add registry.k8s.io/pause:3.3: (1.186195669s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 cache add registry.k8s.io/pause:latest: (1.003053437s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-904202 /tmp/TestFunctionalserialCacheCmdcacheadd_local1399468475/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cache add minikube-local-cache-test:functional-904202
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 cache add minikube-local-cache-test:functional-904202: (1.856358316s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cache delete minikube-local-cache-test:functional-904202
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-904202
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (201.46666ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
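Note (not part of the test output): cache_reload removes a cached image from the node's container runtime and restores it from minikube's on-disk cache. A condensed sketch of the sequence recorded above, assuming an installed minikube binary in place of out/minikube-linux-amd64:

	minikube -p functional-904202 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-904202 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image no longer present
	minikube -p functional-904202 cache reload
	minikube -p functional-904202 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again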

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 kubectl -- --context functional-904202 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-904202 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.35s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-904202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-904202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.346007694s)
functional_test.go:757: restart took 31.346127914s for "functional-904202" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.35s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-904202 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 logs: (1.454106557s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 logs --file /tmp/TestFunctionalserialLogsFileCmd2822969964/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 logs --file /tmp/TestFunctionalserialLogsFileCmd2822969964/001/logs.txt: (1.50059092s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
TestFunctional/serial/InvalidService (4.9s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-904202 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-904202
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-904202: exit status 115 (263.810695ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.96:32452 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-904202 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-904202 delete -f testdata/invalidsvc.yaml: (1.452650862s)
--- PASS: TestFunctional/serial/InvalidService (4.90s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 config get cpus: exit status 14 (55.878703ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 config get cpus: exit status 14 (43.441562ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-904202 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-904202 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 137791: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.27s)

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-904202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-904202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (179.100326ms)

                                                
                                                
-- stdout --
	* [functional-904202] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:45:11.263634  137539 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:45:11.263788  137539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:11.263828  137539 out.go:304] Setting ErrFile to fd 2...
	I0731 19:45:11.263840  137539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:11.264142  137539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:45:11.264701  137539 out.go:298] Setting JSON to false
	I0731 19:45:11.265941  137539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5247,"bootTime":1722449864,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:45:11.266024  137539 start.go:139] virtualization: kvm guest
	I0731 19:45:11.268023  137539 out.go:177] * [functional-904202] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:45:11.275784  137539 notify.go:220] Checking for updates...
	I0731 19:45:11.276951  137539 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:45:11.278331  137539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:45:11.279872  137539 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:45:11.281115  137539 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:45:11.282446  137539 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:45:11.283841  137539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:45:11.285735  137539 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:45:11.286421  137539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:45:11.286465  137539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:11.307707  137539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I0731 19:45:11.308178  137539 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:11.308808  137539 main.go:141] libmachine: Using API Version  1
	I0731 19:45:11.308826  137539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:11.309268  137539 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:11.309608  137539 main.go:141] libmachine: (functional-904202) Calling .DriverName
	I0731 19:45:11.309912  137539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:45:11.310338  137539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:45:11.310372  137539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:11.328561  137539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0731 19:45:11.329115  137539 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:11.329783  137539 main.go:141] libmachine: Using API Version  1
	I0731 19:45:11.329800  137539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:11.330243  137539 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:11.330440  137539 main.go:141] libmachine: (functional-904202) Calling .DriverName
	I0731 19:45:11.368226  137539 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:45:11.369949  137539 start.go:297] selected driver: kvm2
	I0731 19:45:11.369973  137539 start.go:901] validating driver "kvm2" against &{Name:functional-904202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-904202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:45:11.370117  137539 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:45:11.373409  137539 out.go:177] 
	W0731 19:45:11.375070  137539 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 19:45:11.376319  137539 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-904202 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-904202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-904202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.663432ms)

-- stdout --
	* [functional-904202] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 19:45:11.101997  137505 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:45:11.102309  137505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:11.102323  137505 out.go:304] Setting ErrFile to fd 2...
	I0731 19:45:11.102330  137505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:11.102774  137505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 19:45:11.103469  137505 out.go:298] Setting JSON to false
	I0731 19:45:11.104783  137505 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5247,"bootTime":1722449864,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:45:11.104867  137505 start.go:139] virtualization: kvm guest
	I0731 19:45:11.107274  137505 out.go:177] * [functional-904202] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0731 19:45:11.109401  137505 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:45:11.109407  137505 notify.go:220] Checking for updates...
	I0731 19:45:11.112323  137505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:45:11.113768  137505 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 19:45:11.115265  137505 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 19:45:11.116800  137505 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:45:11.118235  137505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:45:11.120071  137505 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:45:11.120716  137505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:45:11.120787  137505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:11.136376  137505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0731 19:45:11.136800  137505 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:11.137391  137505 main.go:141] libmachine: Using API Version  1
	I0731 19:45:11.137412  137505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:11.137763  137505 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:11.137950  137505 main.go:141] libmachine: (functional-904202) Calling .DriverName
	I0731 19:45:11.138207  137505 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:45:11.138633  137505 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:45:11.138695  137505 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:11.153609  137505 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0731 19:45:11.153999  137505 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:11.154462  137505 main.go:141] libmachine: Using API Version  1
	I0731 19:45:11.154497  137505 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:11.154804  137505 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:11.154979  137505 main.go:141] libmachine: (functional-904202) Calling .DriverName
	I0731 19:45:11.188671  137505 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0731 19:45:11.190159  137505 start.go:297] selected driver: kvm2
	I0731 19:45:11.190174  137505 start.go:901] validating driver "kvm2" against &{Name:functional-904202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-904202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.96 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:45:11.190300  137505 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:45:11.192633  137505 out.go:177] 
	W0731 19:45:11.194448  137505 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 19:45:11.195443  137505 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.2s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)

TestFunctional/parallel/ServiceCmdConnect (7.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-904202 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-904202 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-jblvc" [48d4690d-d0cb-45b8-a940-a57bf33934fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-jblvc" [48d4690d-d0cb-45b8-a940-a57bf33934fb] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003941302s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.96:31055
functional_test.go:1671: http://192.168.39.96:31055: success! body:

Hostname: hello-node-connect-57b4589c47-jblvc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.96:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.96:31055
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.57s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (44.71s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [eedef46a-010f-426e-a6a9-ee06bc5bd414] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004624884s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-904202 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-904202 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-904202 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-904202 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4fe70dde-378c-4e4a-a2d7-0bfdf0b2fb52] Pending
helpers_test.go:344: "sp-pod" [4fe70dde-378c-4e4a-a2d7-0bfdf0b2fb52] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0731 19:45:18.420766  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [4fe70dde-378c-4e4a-a2d7-0bfdf0b2fb52] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004617997s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-904202 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-904202 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-904202 delete -f testdata/storage-provisioner/pod.yaml: (1.891038374s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-904202 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b8f6fb43-7c07-417f-adec-16cc39b2d218] Pending
helpers_test.go:344: "sp-pod" [b8f6fb43-7c07-417f-adec-16cc39b2d218] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b8f6fb43-7c07-417f-adec-16cc39b2d218] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003694309s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-904202 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.71s)

TestFunctional/parallel/SSHCmd (0.38s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

TestFunctional/parallel/CpCmd (1.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh -n functional-904202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cp functional-904202:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1184384523/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh -n functional-904202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh -n functional-904202 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.43s)

TestFunctional/parallel/MySQL (26.88s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-904202 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-k8ksz" [9352ba56-9e44-4fcc-b184-e9b221019244] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-k8ksz" [9352ba56-9e44-4fcc-b184-e9b221019244] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.004365939s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-904202 exec mysql-64454c8b5c-k8ksz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-904202 exec mysql-64454c8b5c-k8ksz -- mysql -ppassword -e "show databases;": exit status 1 (131.532929ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-904202 exec mysql-64454c8b5c-k8ksz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.88s)

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/128891/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo cat /etc/test/nested/copy/128891/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/128891.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo cat /etc/ssl/certs/128891.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/128891.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo cat /usr/share/ca-certificates/128891.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1288912.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo cat /etc/ssl/certs/1288912.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1288912.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo cat /usr/share/ca-certificates/1288912.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-904202 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 ssh "sudo systemctl is-active docker": exit status 1 (208.746447ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 ssh "sudo systemctl is-active containerd": exit status 1 (203.233638ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

TestFunctional/parallel/License (0.71s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.71s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-904202 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-904202 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-smvml" [246cc75b-5823-445a-92a2-51a00f6af05f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-smvml" [246cc75b-5823-445a-92a2-51a00f6af05f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00436281s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "237.969684ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "49.299813ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/MountCmd/any-port (10.93s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdany-port3057746067/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722455110336767187" to /tmp/TestFunctionalparallelMountCmdany-port3057746067/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722455110336767187" to /tmp/TestFunctionalparallelMountCmdany-port3057746067/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722455110336767187" to /tmp/TestFunctionalparallelMountCmdany-port3057746067/001/test-1722455110336767187
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.451046ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 19:45 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 19:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 19:45 test-1722455110336767187
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh cat /mount-9p/test-1722455110336767187
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-904202 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [30fb1915-dbf5-4f26-bfdf-c728abc37cee] Pending
helpers_test.go:344: "busybox-mount" [30fb1915-dbf5-4f26-bfdf-c728abc37cee] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [30fb1915-dbf5-4f26-bfdf-c728abc37cee] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [30fb1915-dbf5-4f26-bfdf-c728abc37cee] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004351175s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-904202 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdany-port3057746067/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.93s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "248.362995ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "56.93885ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/MountCmd/specific-port (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdspecific-port3141152692/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.996916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdspecific-port3141152692/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-904202 ssh "sudo umount -f /mount-9p": exit status 1 (190.641325ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-904202 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdspecific-port3141152692/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 service list -o json
functional_test.go:1490: Took "454.042566ms" to run "out/minikube-linux-amd64 -p functional-904202 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.96:30215
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.96:30215
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.74s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2232135398/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2232135398/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2232135398/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-904202 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2232135398/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2232135398/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-904202 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2232135398/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.74s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.84s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.84s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-904202 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-904202
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-904202
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-904202 image ls --format short --alsologtostderr:
I0731 19:45:37.292693  139412 out.go:291] Setting OutFile to fd 1 ...
I0731 19:45:37.292825  139412 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.292838  139412 out.go:304] Setting ErrFile to fd 2...
I0731 19:45:37.292850  139412 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.293180  139412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
I0731 19:45:37.293921  139412 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.294076  139412 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.294631  139412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.294684  139412 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.309877  139412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
I0731 19:45:37.310418  139412 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.311030  139412 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.311054  139412 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.311330  139412 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.311562  139412 main.go:141] libmachine: (functional-904202) Calling .GetState
I0731 19:45:37.313689  139412 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.313735  139412 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.328549  139412 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
I0731 19:45:37.329021  139412 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.329544  139412 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.329566  139412 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.329915  139412 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.330094  139412 main.go:141] libmachine: (functional-904202) Calling .DriverName
I0731 19:45:37.330313  139412 ssh_runner.go:195] Run: systemctl --version
I0731 19:45:37.330341  139412 main.go:141] libmachine: (functional-904202) Calling .GetSSHHostname
I0731 19:45:37.333488  139412 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.334023  139412 main.go:141] libmachine: (functional-904202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ae:4a", ip: ""} in network mk-functional-904202: {Iface:virbr1 ExpiryTime:2024-07-31 20:42:54 +0000 UTC Type:0 Mac:52:54:00:2c:ae:4a Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-904202 Clientid:01:52:54:00:2c:ae:4a}
I0731 19:45:37.334044  139412 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined IP address 192.168.39.96 and MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.334317  139412 main.go:141] libmachine: (functional-904202) Calling .GetSSHPort
I0731 19:45:37.334494  139412 main.go:141] libmachine: (functional-904202) Calling .GetSSHKeyPath
I0731 19:45:37.334637  139412 main.go:141] libmachine: (functional-904202) Calling .GetSSHUsername
I0731 19:45:37.334784  139412 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/functional-904202/id_rsa Username:docker}
I0731 19:45:37.412052  139412 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 19:45:37.478053  139412 main.go:141] libmachine: Making call to close driver server
I0731 19:45:37.478071  139412 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:37.478365  139412 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:37.478386  139412 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:45:37.478462  139412 main.go:141] libmachine: (functional-904202) DBG | Closing plugin on server side
I0731 19:45:37.478524  139412 main.go:141] libmachine: Making call to close driver server
I0731 19:45:37.478552  139412 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:37.478839  139412 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:37.478856  139412 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:45:37.478856  139412 main.go:141] libmachine: (functional-904202) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-904202 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kicbase/echo-server           | functional-904202  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/minikube-local-cache-test     | functional-904202  | 594b0d318d162 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-904202 image ls --format table --alsologtostderr:
I0731 19:45:37.765527  139541 out.go:291] Setting OutFile to fd 1 ...
I0731 19:45:37.765637  139541 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.765647  139541 out.go:304] Setting ErrFile to fd 2...
I0731 19:45:37.765654  139541 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.765867  139541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
I0731 19:45:37.766425  139541 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.766522  139541 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.766904  139541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.766945  139541 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.782491  139541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
I0731 19:45:37.782975  139541 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.783588  139541 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.783615  139541 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.783959  139541 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.784178  139541 main.go:141] libmachine: (functional-904202) Calling .GetState
I0731 19:45:37.786123  139541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.786172  139541 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.801272  139541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35267
I0731 19:45:37.801763  139541 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.802198  139541 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.802223  139541 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.802555  139541 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.802782  139541 main.go:141] libmachine: (functional-904202) Calling .DriverName
I0731 19:45:37.803004  139541 ssh_runner.go:195] Run: systemctl --version
I0731 19:45:37.803038  139541 main.go:141] libmachine: (functional-904202) Calling .GetSSHHostname
I0731 19:45:37.806126  139541 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.806573  139541 main.go:141] libmachine: (functional-904202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ae:4a", ip: ""} in network mk-functional-904202: {Iface:virbr1 ExpiryTime:2024-07-31 20:42:54 +0000 UTC Type:0 Mac:52:54:00:2c:ae:4a Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-904202 Clientid:01:52:54:00:2c:ae:4a}
I0731 19:45:37.806604  139541 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined IP address 192.168.39.96 and MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.806708  139541 main.go:141] libmachine: (functional-904202) Calling .GetSSHPort
I0731 19:45:37.806881  139541 main.go:141] libmachine: (functional-904202) Calling .GetSSHKeyPath
I0731 19:45:37.807020  139541 main.go:141] libmachine: (functional-904202) Calling .GetSSHUsername
I0731 19:45:37.807149  139541 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/functional-904202/id_rsa Username:docker}
I0731 19:45:37.930078  139541 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 19:45:38.170574  139541 main.go:141] libmachine: Making call to close driver server
I0731 19:45:38.170617  139541 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:38.170976  139541 main.go:141] libmachine: (functional-904202) DBG | Closing plugin on server side
I0731 19:45:38.171024  139541 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:38.171045  139541 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:45:38.171064  139541 main.go:141] libmachine: Making call to close driver server
I0731 19:45:38.171075  139541 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:38.171323  139541 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:38.171338  139541 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-904202 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55
d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/k
indest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"594b0d318d162497a17eb3a4bdb1ab5ba6324ccc19c1ab99cbc5530efc31ea8d","repoDigests":["localhost/minikube-local-cache-test@sha256:aa82e0ec6cf0d1413299653a1796ae70247c7294e15185b2665e1deccf27c52c"],"repoTags":["localhost/minikube-local-cache-test:functional-904202"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"]
,"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-904202"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDig
ests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker
.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-904202 image ls --format json --alsologtostderr:
I0731 19:45:37.532203  139486 out.go:291] Setting OutFile to fd 1 ...
I0731 19:45:37.532300  139486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.532304  139486 out.go:304] Setting ErrFile to fd 2...
I0731 19:45:37.532308  139486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.532511  139486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
I0731 19:45:37.533045  139486 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.533140  139486 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.533522  139486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.533570  139486 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.548280  139486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
I0731 19:45:37.548721  139486 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.549319  139486 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.549368  139486 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.549684  139486 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.549891  139486 main.go:141] libmachine: (functional-904202) Calling .GetState
I0731 19:45:37.551724  139486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.551758  139486 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.566468  139486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
I0731 19:45:37.566951  139486 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.567515  139486 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.567546  139486 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.567869  139486 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.568070  139486 main.go:141] libmachine: (functional-904202) Calling .DriverName
I0731 19:45:37.568293  139486 ssh_runner.go:195] Run: systemctl --version
I0731 19:45:37.568325  139486 main.go:141] libmachine: (functional-904202) Calling .GetSSHHostname
I0731 19:45:37.571498  139486 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.571944  139486 main.go:141] libmachine: (functional-904202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ae:4a", ip: ""} in network mk-functional-904202: {Iface:virbr1 ExpiryTime:2024-07-31 20:42:54 +0000 UTC Type:0 Mac:52:54:00:2c:ae:4a Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-904202 Clientid:01:52:54:00:2c:ae:4a}
I0731 19:45:37.571970  139486 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined IP address 192.168.39.96 and MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.572117  139486 main.go:141] libmachine: (functional-904202) Calling .GetSSHPort
I0731 19:45:37.572334  139486 main.go:141] libmachine: (functional-904202) Calling .GetSSHKeyPath
I0731 19:45:37.572510  139486 main.go:141] libmachine: (functional-904202) Calling .GetSSHUsername
I0731 19:45:37.572683  139486 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/functional-904202/id_rsa Username:docker}
I0731 19:45:37.656418  139486 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 19:45:37.711977  139486 main.go:141] libmachine: Making call to close driver server
I0731 19:45:37.711988  139486 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:37.712415  139486 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:37.712407  139486 main.go:141] libmachine: (functional-904202) DBG | Closing plugin on server side
I0731 19:45:37.712431  139486 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:45:37.712464  139486 main.go:141] libmachine: Making call to close driver server
I0731 19:45:37.712476  139486 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:37.712728  139486 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:37.712746  139486 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-904202 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-904202
size: "4943877"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 594b0d318d162497a17eb3a4bdb1ab5ba6324ccc19c1ab99cbc5530efc31ea8d
repoDigests:
- localhost/minikube-local-cache-test@sha256:aa82e0ec6cf0d1413299653a1796ae70247c7294e15185b2665e1deccf27c52c
repoTags:
- localhost/minikube-local-cache-test:functional-904202
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-904202 image ls --format yaml --alsologtostderr:
I0731 19:45:37.292693  139413 out.go:291] Setting OutFile to fd 1 ...
I0731 19:45:37.292822  139413 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.292834  139413 out.go:304] Setting ErrFile to fd 2...
I0731 19:45:37.292841  139413 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:45:37.293117  139413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
I0731 19:45:37.293917  139413 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.294076  139413 config.go:182] Loaded profile config "functional-904202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 19:45:37.294621  139413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.294683  139413 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.309552  139413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
I0731 19:45:37.310015  139413 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.310654  139413 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.310675  139413 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.311054  139413 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.311447  139413 main.go:141] libmachine: (functional-904202) Calling .GetState
I0731 19:45:37.313688  139413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 19:45:37.313732  139413 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:45:37.328556  139413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
I0731 19:45:37.329087  139413 main.go:141] libmachine: () Calling .GetVersion
I0731 19:45:37.329685  139413 main.go:141] libmachine: Using API Version  1
I0731 19:45:37.329721  139413 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:45:37.330052  139413 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:45:37.330221  139413 main.go:141] libmachine: (functional-904202) Calling .DriverName
I0731 19:45:37.330411  139413 ssh_runner.go:195] Run: systemctl --version
I0731 19:45:37.330445  139413 main.go:141] libmachine: (functional-904202) Calling .GetSSHHostname
I0731 19:45:37.333526  139413 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.333939  139413 main.go:141] libmachine: (functional-904202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ae:4a", ip: ""} in network mk-functional-904202: {Iface:virbr1 ExpiryTime:2024-07-31 20:42:54 +0000 UTC Type:0 Mac:52:54:00:2c:ae:4a Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:functional-904202 Clientid:01:52:54:00:2c:ae:4a}
I0731 19:45:37.333961  139413 main.go:141] libmachine: (functional-904202) DBG | domain functional-904202 has defined IP address 192.168.39.96 and MAC address 52:54:00:2c:ae:4a in network mk-functional-904202
I0731 19:45:37.334192  139413 main.go:141] libmachine: (functional-904202) Calling .GetSSHPort
I0731 19:45:37.334362  139413 main.go:141] libmachine: (functional-904202) Calling .GetSSHKeyPath
I0731 19:45:37.334565  139413 main.go:141] libmachine: (functional-904202) Calling .GetSSHUsername
I0731 19:45:37.334720  139413 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/functional-904202/id_rsa Username:docker}
I0731 19:45:37.412860  139413 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 19:45:37.462591  139413 main.go:141] libmachine: Making call to close driver server
I0731 19:45:37.462611  139413 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:37.462930  139413 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:37.462967  139413 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:45:37.462980  139413 main.go:141] libmachine: Making call to close driver server
I0731 19:45:37.462988  139413 main.go:141] libmachine: (functional-904202) Calling .Close
I0731 19:45:37.463339  139413 main.go:141] libmachine: (functional-904202) DBG | Closing plugin on server side
I0731 19:45:37.463353  139413 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:45:37.463368  139413 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
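
Note: the three ImageList* format tests above can be reproduced by hand against any running profile; a minimal sketch, assuming an installed minikube binary on PATH in place of the suite's out/minikube-linux-amd64 and reusing the suite's functional-904202 profile name:
    minikube -p functional-904202 image ls --format table
    minikube -p functional-904202 image ls --format json
    minikube -p functional-904202 image ls --format yaml
As the stderr traces show, each listing is backed by the same call inside the node, roughly equivalent to:
    minikube -p functional-904202 ssh "sudo crictl images --output json"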

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.917694243s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-904202
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image load --daemon docker.io/kicbase/echo-server:functional-904202 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 image load --daemon docker.io/kicbase/echo-server:functional-904202 --alsologtostderr: (1.636745567s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image load --daemon docker.io/kicbase/echo-server:functional-904202 --alsologtostderr
2024/07/31 19:45:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-904202
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image load --daemon docker.io/kicbase/echo-server:functional-904202 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.20s)
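
Note: the daemon-load tests above share one pattern; a minimal by-hand sketch, assuming a local docker daemon and the same profile name:
    docker pull docker.io/kicbase/echo-server:latest
    docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-904202
    minikube -p functional-904202 image load --daemon docker.io/kicbase/echo-server:functional-904202
    minikube -p functional-904202 image ls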

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image save docker.io/kicbase/echo-server:functional-904202 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-904202 image save docker.io/kicbase/echo-server:functional-904202 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.448774344s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image rm docker.io/kicbase/echo-server:functional-904202 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)
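
Note: ImageSaveToFile and ImageLoadFromFile form a save/remove/load round trip; a minimal sketch, assuming the suite's profile and substituting a scratch path for the workspace tarball:
    minikube -p functional-904202 image save docker.io/kicbase/echo-server:functional-904202 /tmp/echo-server-save.tar
    minikube -p functional-904202 image rm docker.io/kicbase/echo-server:functional-904202
    minikube -p functional-904202 image load /tmp/echo-server-save.tar
    minikube -p functional-904202 image ls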

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-904202
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-904202 image save --daemon docker.io/kicbase/echo-server:functional-904202 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-904202
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-904202
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-904202
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-904202
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (277.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-235073 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 19:47:34.580126  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:48:02.262515  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 19:50:09.825433  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:09.830761  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:09.841081  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:09.861494  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:09.901830  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:09.982186  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:10.142569  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:10.463681  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:11.104598  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:12.384809  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:14.945043  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:20.065806  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:50:30.306450  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-235073 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m36.560165379s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (277.24s)
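
Note: the cert_rotation errors above reference client certificates for the addons-715925 and functional-904202 profiles, which were presumably cleaned up earlier in the run; they come from the shared cert_rotation watcher and did not affect this test, which passed. The start command under test amounts to the following, sketched against an installed minikube binary instead of the suite's out/minikube-linux-amd64:
    minikube start -p ha-235073 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
    minikube -p ha-235073 status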

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-235073 -- rollout status deployment/busybox: (4.302584013s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-d7lpt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-g9vds -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-wqc9h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-d7lpt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-g9vds -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-wqc9h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-d7lpt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-g9vds -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-wqc9h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.44s)
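
Note: DeployApp reduces to in-cluster DNS resolution from each busybox replica; a minimal sketch, assuming kubectl is pointed at the ha-235073 context (the suite goes through "minikube kubectl -p ha-235073 --") and using a placeholder pod name:
    kubectl apply -f ./testdata/ha/ha-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local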

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-d7lpt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-d7lpt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-g9vds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-g9vds -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-wqc9h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-235073 -- exec busybox-fc5497c4f-wqc9h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
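
Note: PingHostFromPods checks that each pod can resolve and reach the host gateway; a sketch with a placeholder pod name (192.168.39.1 is this run's KVM network gateway and will differ on other setups):
    kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"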

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-235073 -v=7 --alsologtostderr
E0731 19:50:50.786947  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 19:51:31.747578  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-235073 -v=7 --alsologtostderr: (54.727490294s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.56s)
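
Note: the worker join under test is a single CLI call; by hand, against an installed minikube binary:
    minikube node add -p ha-235073
    minikube -p ha-235073 status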

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-235073 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp testdata/cp-test.txt ha-235073:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073:/home/docker/cp-test.txt ha-235073-m02:/home/docker/cp-test_ha-235073_ha-235073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m02 "sudo cat /home/docker/cp-test_ha-235073_ha-235073-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073:/home/docker/cp-test.txt ha-235073-m03:/home/docker/cp-test_ha-235073_ha-235073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m03 "sudo cat /home/docker/cp-test_ha-235073_ha-235073-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073:/home/docker/cp-test.txt ha-235073-m04:/home/docker/cp-test_ha-235073_ha-235073-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m04 "sudo cat /home/docker/cp-test_ha-235073_ha-235073-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp testdata/cp-test.txt ha-235073-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m02:/home/docker/cp-test.txt ha-235073:/home/docker/cp-test_ha-235073-m02_ha-235073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073 "sudo cat /home/docker/cp-test_ha-235073-m02_ha-235073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m02:/home/docker/cp-test.txt ha-235073-m03:/home/docker/cp-test_ha-235073-m02_ha-235073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m03 "sudo cat /home/docker/cp-test_ha-235073-m02_ha-235073-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m02:/home/docker/cp-test.txt ha-235073-m04:/home/docker/cp-test_ha-235073-m02_ha-235073-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m04 "sudo cat /home/docker/cp-test_ha-235073-m02_ha-235073-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp testdata/cp-test.txt ha-235073-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt ha-235073:/home/docker/cp-test_ha-235073-m03_ha-235073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073 "sudo cat /home/docker/cp-test_ha-235073-m03_ha-235073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt ha-235073-m02:/home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m02 "sudo cat /home/docker/cp-test_ha-235073-m03_ha-235073-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m03:/home/docker/cp-test.txt ha-235073-m04:/home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m04 "sudo cat /home/docker/cp-test_ha-235073-m03_ha-235073-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp testdata/cp-test.txt ha-235073-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3796763680/001/cp-test_ha-235073-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt ha-235073:/home/docker/cp-test_ha-235073-m04_ha-235073.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073 "sudo cat /home/docker/cp-test_ha-235073-m04_ha-235073.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt ha-235073-m02:/home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m02 "sudo cat /home/docker/cp-test_ha-235073-m04_ha-235073-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 cp ha-235073-m04:/home/docker/cp-test.txt ha-235073-m03:/home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 ssh -n ha-235073-m03 "sudo cat /home/docker/cp-test_ha-235073-m04_ha-235073-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.87s)
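
Editor's note: the CopyFile test above pairs every `minikube cp` with an `ssh -n <node> "sudo cat ..."` round trip to confirm the file landed intact on each control-plane and worker node. Below is a minimal standalone sketch of that copy-and-verify loop, assuming a `minikube` binary on PATH and the `ha-235073` profile from this run; the helper name is illustrative, not the test's own code.

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // copyAndVerify copies src into a node of the given profile and reads it
    // back over ssh, mirroring the cp / `ssh -n ... sudo cat` pairs in the log.
    func copyAndVerify(profile, node, src, dst string) error {
        // minikube -p <profile> cp <src> <node>:<dst>
        cp := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst)
        if out, err := cp.CombinedOutput(); err != nil {
            return fmt.Errorf("cp failed: %v\n%s", err, out)
        }
        // minikube -p <profile> ssh -n <node> -- sudo cat <dst>
        cat := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "--", "sudo", "cat", dst)
        got, err := cat.Output()
        if err != nil {
            return fmt.Errorf("ssh cat failed: %v", err)
        }
        want, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
            return fmt.Errorf("content mismatch on %s:%s", node, dst)
        }
        return nil
    }

    func main() {
        if err := copyAndVerify("ha-235073", "ha-235073-m04", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("copy verified")
    }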

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.459143684s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)
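
Editor's note: this check, and the later Degraded*/HAppy* checks, all shell out to `minikube profile list --output json` and inspect the result. A small sketch of driving that command from Go follows; it assumes only that the command prints a single JSON document, and the decode is deliberately schema-agnostic because the report does not show the output's fields.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Same invocation the test uses: minikube profile list --output json
        out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
        if err != nil {
            log.Fatalf("profile list: %v", err)
        }
        // Decode generically; we avoid hard-coding field names beyond
        // "the output is a JSON object".
        var doc map[string]any
        if err := json.Unmarshal(out, &doc); err != nil {
            log.Fatalf("unexpected output: %v", err)
        }
        for k, v := range doc {
            fmt.Printf("%s: %T\n", k, v)
        }
    }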

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-235073 node delete m03 -v=7 --alsologtostderr: (16.445180889s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.18s)
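
Editor's note: after deleting m03 the test re-reads the node list and renders each node's Ready condition through the go-template shown above. The same readiness check can be written against `kubectl get nodes -o json`; the sketch below uses only standard Node object fields, nothing specific to this test suite.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var nl nodeList
        if err := json.Unmarshal(out, &nl); err != nil {
            log.Fatal(err)
        }
        // Print the same information the go-template extracts: one Ready status per node.
        for _, n := range nl.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" {
                    fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
                }
            }
        }
    }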

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (345.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-235073 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 20:05:09.826524  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 20:06:32.869521  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 20:07:34.578683  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
E0731 20:10:09.825367  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-235073 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m44.667109329s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (345.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (81.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-235073 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-235073 --control-plane -v=7 --alsologtostderr: (1m20.460553832s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-235073 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
x
+
TestJSONOutput/start/Command (55.99s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-694217 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0731 20:12:34.580061  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-694217 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.98692651s)
--- PASS: TestJSONOutput/start/Command (55.99s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
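
Editor's note: the DistinctCurrentSteps and IncreasingCurrentSteps subtests assert properties of the step events that `--output=json` emits, one JSON object per line (an example event stream is printed under TestErrorJSONOutput further down). The sketch below checks those two properties over a captured stream, assuming step events carry type "io.k8s.sigs.minikube.step" and a string data.currentstep field as in that example.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "os"
        "strconv"
    )

    type event struct {
        Type string `json:"type"`
        Data struct {
            CurrentStep string `json:"currentstep"`
        } `json:"data"`
    }

    // Reads a captured `minikube start --output=json` log on stdin and checks
    // that currentstep values are distinct and strictly increasing.
    func main() {
        seen := map[int]bool{}
        last := -1
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if json.Unmarshal(sc.Bytes(), &e) != nil || e.Type != "io.k8s.sigs.minikube.step" {
                continue // ignore non-JSON lines and non-step events
            }
            n, err := strconv.Atoi(e.Data.CurrentStep)
            if err != nil {
                log.Fatalf("bad currentstep %q", e.Data.CurrentStep)
            }
            if seen[n] || n <= last {
                log.Fatalf("step %d repeated or out of order", n)
            }
            seen[n] = true
            last = n
        }
        fmt.Println("steps distinct and increasing")
    }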

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-694217 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-694217 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (9.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-694217 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-694217 --output=json --user=testUser: (9.355764767s)
--- PASS: TestJSONOutput/stop/Command (9.36s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-629125 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-629125 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.157102ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f475538d-fb54-40ff-82c9-b627cb772c39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-629125] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"970a536a-5691-41fb-96a0-ace0db8253d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"5afd2164-4f0e-419e-ba02-9f9f7c0a4cdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b66cb4cd-d5c2-4e8b-92ba-2bbfa0bdab7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig"}}
	{"specversion":"1.0","id":"4feca2b5-4b2a-474f-87e7-3e83134ddb86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube"}}
	{"specversion":"1.0","id":"fae21ca2-2eef-4d64-9fa4-6f74d86cd6d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b64d2c3a-a653-41d0-b8fa-31a5ca7f94bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5e48b893-d72c-4634-ab58-5c72ba798303","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-629125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-629125
--- PASS: TestErrorJSONOutput (0.19s)
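
Editor's note: the stdout block above shows what a failed `--output=json` start looks like: the same line-delimited events, ending in an "io.k8s.sigs.minikube.error" event whose data carries exitcode, name and message. A sketch that pulls that error event out of a captured stream follows; the struct fields are taken from the event printed above and nothing else is assumed.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    type errEvent struct {
        Type string `json:"type"`
        Data struct {
            ExitCode string `json:"exitcode"`
            Name     string `json:"name"`
            Message  string `json:"message"`
        } `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e errEvent
            if json.Unmarshal(sc.Bytes(), &e) != nil {
                continue
            }
            if strings.HasSuffix(e.Type, ".error") {
                // For the run above this prints:
                // 56 DRV_UNSUPPORTED_OS The driver 'fail' is not supported on linux/amd64
                fmt.Println(e.Data.ExitCode, e.Data.Name, e.Data.Message)
                return
            }
        }
        fmt.Fprintln(os.Stderr, "no error event found")
        os.Exit(1)
    }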

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (93.54s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-081777 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-081777 --driver=kvm2  --container-runtime=crio: (44.737613226s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-085159 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-085159 --driver=kvm2  --container-runtime=crio: (46.225232772s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-081777
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-085159
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-085159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-085159
helpers_test.go:175: Cleaning up "first-081777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-081777
--- PASS: TestMinikubeProfile (93.54s)
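
Editor's note: TestMinikubeProfile starts two independent profiles, switches the active one with `minikube profile <name>`, and reads `profile list -ojson` back after each switch. A minimal sketch of that switch-and-inspect loop, assuming only the commands shown in the log; the profile names are the ones from this run, and the final check is a deliberately weak substring test since the JSON schema is not shown here.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func run(args ...string) string {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %s: %v\n%s", strings.Join(args, " "), err, out)
        }
        return string(out)
    }

    func main() {
        for _, p := range []string{"first-081777", "second-085159"} {
            run("profile", p) // make p the active profile
            listing := run("profile", "list", "-ojson")
            // Sanity check only: the profile we just selected should appear in the listing.
            if !strings.Contains(listing, p) {
                log.Fatalf("%s missing from profile list", p)
            }
            fmt.Println("active profile set to", p)
        }
    }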

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.68s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-657075 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-657075 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.675282084s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-657075 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-657075 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
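
Editor's note: the mount verification is two ssh commands: list the mount point, then confirm a 9p filesystem is mounted. A sketch of the same check from Go, using the commands shown in the log; the profile name and mount path are this run's values, and the 9p filter is done on the host instead of piping through grep inside the guest.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "mount-start-1-657075"
        // minikube -p <profile> ssh -- ls /minikube-host
        if out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
            log.Fatalf("ls /minikube-host: %v\n%s", err, out)
        }
        // minikube -p <profile> ssh -- mount, then look for a 9p entry.
        out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").Output()
        if err != nil {
            log.Fatal(err)
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "9p") {
                fmt.Println("9p mount present:", line)
                return
            }
        }
        log.Fatal("no 9p mount found")
    }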

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.81s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-676228 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0731 20:15:09.826092  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-676228 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.806945794s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.81s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676228 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676228 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-657075 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676228 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676228 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-676228
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-676228: (1.275574393s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (24.46s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-676228
E0731 20:15:37.625915  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-676228: (23.461023087s)
--- PASS: TestMountStart/serial/RestartStopped (24.46s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676228 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-676228 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (125.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094885 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 20:17:34.578132  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-094885 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m5.511682458s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.91s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-094885 -- rollout status deployment/busybox: (4.126492803s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-2w5wm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-wwlpt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-2w5wm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-wwlpt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-2w5wm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-wwlpt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.58s)
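
Editor's note: DeployApp2Nodes applies the busybox manifest, waits for the rollout, then resolves kubernetes.io and the kubernetes.default names from each pod. A condensed sketch of that flow is below; it assumes kubectl on PATH pointed at the multinode cluster (the test itself goes through `minikube kubectl -p multinode-094885 --`), and the manifest path and deployment name are the ones in the log.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func kubectl(args ...string) string {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %s: %v\n%s", strings.Join(args, " "), err, out)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        kubectl("apply", "-f", "testdata/multinodes/multinode-pod-dns-test.yaml")
        kubectl("rollout", "status", "deployment/busybox")
        // Same jsonpath the test uses to discover pod names.
        names := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
        for _, pod := range names {
            for _, host := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
                fmt.Printf("%s -> %s\n%s\n", pod, host, kubectl("exec", pod, "--", "nslookup", host))
            }
        }
    }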

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-2w5wm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-2w5wm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-wwlpt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-094885 -- exec busybox-fc5497c4f-wwlpt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (52.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-094885 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-094885 -v 3 --alsologtostderr: (51.823686413s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.38s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-094885 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
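
Editor's note: the label check above is a single jsonpath query over the nodes. The same data is easier to inspect as JSON; the short sketch below prints each node's label map using only standard Node object fields, nothing specific to this test.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "multinode-094885",
            "get", "nodes", "-o", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var nodes struct {
            Items []struct {
                Metadata struct {
                    Name   string            `json:"name"`
                    Labels map[string]string `json:"labels"`
                } `json:"metadata"`
            } `json:"items"`
        }
        if err := json.Unmarshal(out, &nodes); err != nil {
            log.Fatal(err)
        }
        // One line per node with its full label map.
        for _, n := range nodes.Items {
            fmt.Printf("%s: %v\n", n.Metadata.Name, n.Metadata.Labels)
        }
    }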

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp testdata/cp-test.txt multinode-094885:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4009504673/001/cp-test_multinode-094885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885:/home/docker/cp-test.txt multinode-094885-m02:/home/docker/cp-test_multinode-094885_multinode-094885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m02 "sudo cat /home/docker/cp-test_multinode-094885_multinode-094885-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885:/home/docker/cp-test.txt multinode-094885-m03:/home/docker/cp-test_multinode-094885_multinode-094885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m03 "sudo cat /home/docker/cp-test_multinode-094885_multinode-094885-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp testdata/cp-test.txt multinode-094885-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4009504673/001/cp-test_multinode-094885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885-m02:/home/docker/cp-test.txt multinode-094885:/home/docker/cp-test_multinode-094885-m02_multinode-094885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885 "sudo cat /home/docker/cp-test_multinode-094885-m02_multinode-094885.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885-m02:/home/docker/cp-test.txt multinode-094885-m03:/home/docker/cp-test_multinode-094885-m02_multinode-094885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m03 "sudo cat /home/docker/cp-test_multinode-094885-m02_multinode-094885-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp testdata/cp-test.txt multinode-094885-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4009504673/001/cp-test_multinode-094885-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt multinode-094885:/home/docker/cp-test_multinode-094885-m03_multinode-094885.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885 "sudo cat /home/docker/cp-test_multinode-094885-m03_multinode-094885.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 cp multinode-094885-m03:/home/docker/cp-test.txt multinode-094885-m02:/home/docker/cp-test_multinode-094885-m03_multinode-094885-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 ssh -n multinode-094885-m02 "sudo cat /home/docker/cp-test_multinode-094885-m03_multinode-094885-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.09s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-094885 node stop m03: (1.467430542s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-094885 status: exit status 7 (414.512071ms)

                                                
                                                
-- stdout --
	multinode-094885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-094885-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-094885-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-094885 status --alsologtostderr: exit status 7 (422.049976ms)

                                                
                                                
-- stdout --
	multinode-094885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-094885-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-094885-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:19:06.917643  157772 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:19:06.917906  157772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:19:06.917916  157772 out.go:304] Setting ErrFile to fd 2...
	I0731 20:19:06.917921  157772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:19:06.918087  157772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:19:06.918242  157772 out.go:298] Setting JSON to false
	I0731 20:19:06.918270  157772 mustload.go:65] Loading cluster: multinode-094885
	I0731 20:19:06.918315  157772 notify.go:220] Checking for updates...
	I0731 20:19:06.918778  157772 config.go:182] Loaded profile config "multinode-094885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 20:19:06.918799  157772 status.go:255] checking status of multinode-094885 ...
	I0731 20:19:06.919250  157772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:19:06.919335  157772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:19:06.937711  157772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0731 20:19:06.938115  157772 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:19:06.938831  157772 main.go:141] libmachine: Using API Version  1
	I0731 20:19:06.938864  157772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:19:06.939374  157772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:19:06.939595  157772 main.go:141] libmachine: (multinode-094885) Calling .GetState
	I0731 20:19:06.941255  157772 status.go:330] multinode-094885 host status = "Running" (err=<nil>)
	I0731 20:19:06.941282  157772 host.go:66] Checking if "multinode-094885" exists ...
	I0731 20:19:06.941612  157772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:19:06.941670  157772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:19:06.957246  157772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0731 20:19:06.957746  157772 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:19:06.958244  157772 main.go:141] libmachine: Using API Version  1
	I0731 20:19:06.958263  157772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:19:06.958545  157772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:19:06.958733  157772 main.go:141] libmachine: (multinode-094885) Calling .GetIP
	I0731 20:19:06.961155  157772 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:19:06.961589  157772 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:19:06.961623  157772 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:19:06.961726  157772 host.go:66] Checking if "multinode-094885" exists ...
	I0731 20:19:06.962063  157772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:19:06.962106  157772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:19:06.977465  157772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0731 20:19:06.977938  157772 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:19:06.978369  157772 main.go:141] libmachine: Using API Version  1
	I0731 20:19:06.978399  157772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:19:06.978734  157772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:19:06.978917  157772 main.go:141] libmachine: (multinode-094885) Calling .DriverName
	I0731 20:19:06.979112  157772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:19:06.979143  157772 main.go:141] libmachine: (multinode-094885) Calling .GetSSHHostname
	I0731 20:19:06.981985  157772 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:19:06.982319  157772 main.go:141] libmachine: (multinode-094885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:94:53", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:16:07 +0000 UTC Type:0 Mac:52:54:00:32:94:53 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-094885 Clientid:01:52:54:00:32:94:53}
	I0731 20:19:06.982347  157772 main.go:141] libmachine: (multinode-094885) DBG | domain multinode-094885 has defined IP address 192.168.39.193 and MAC address 52:54:00:32:94:53 in network mk-multinode-094885
	I0731 20:19:06.982558  157772 main.go:141] libmachine: (multinode-094885) Calling .GetSSHPort
	I0731 20:19:06.982740  157772 main.go:141] libmachine: (multinode-094885) Calling .GetSSHKeyPath
	I0731 20:19:06.982909  157772 main.go:141] libmachine: (multinode-094885) Calling .GetSSHUsername
	I0731 20:19:06.983064  157772 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885/id_rsa Username:docker}
	I0731 20:19:07.066056  157772 ssh_runner.go:195] Run: systemctl --version
	I0731 20:19:07.072398  157772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:19:07.088165  157772 kubeconfig.go:125] found "multinode-094885" server: "https://192.168.39.193:8443"
	I0731 20:19:07.088193  157772 api_server.go:166] Checking apiserver status ...
	I0731 20:19:07.088223  157772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:19:07.101436  157772 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0731 20:19:07.111287  157772 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:19:07.111339  157772 ssh_runner.go:195] Run: ls
	I0731 20:19:07.115623  157772 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8443/healthz ...
	I0731 20:19:07.122011  157772 api_server.go:279] https://192.168.39.193:8443/healthz returned 200:
	ok
	I0731 20:19:07.122037  157772 status.go:422] multinode-094885 apiserver status = Running (err=<nil>)
	I0731 20:19:07.122049  157772 status.go:257] multinode-094885 status: &{Name:multinode-094885 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:19:07.122068  157772 status.go:255] checking status of multinode-094885-m02 ...
	I0731 20:19:07.122360  157772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:19:07.122398  157772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:19:07.137950  157772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I0731 20:19:07.138464  157772 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:19:07.138992  157772 main.go:141] libmachine: Using API Version  1
	I0731 20:19:07.139028  157772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:19:07.139343  157772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:19:07.139563  157772 main.go:141] libmachine: (multinode-094885-m02) Calling .GetState
	I0731 20:19:07.141042  157772 status.go:330] multinode-094885-m02 host status = "Running" (err=<nil>)
	I0731 20:19:07.141060  157772 host.go:66] Checking if "multinode-094885-m02" exists ...
	I0731 20:19:07.141422  157772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:19:07.141465  157772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:19:07.157141  157772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I0731 20:19:07.157567  157772 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:19:07.158080  157772 main.go:141] libmachine: Using API Version  1
	I0731 20:19:07.158107  157772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:19:07.158463  157772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:19:07.158635  157772 main.go:141] libmachine: (multinode-094885-m02) Calling .GetIP
	I0731 20:19:07.161284  157772 main.go:141] libmachine: (multinode-094885-m02) DBG | domain multinode-094885-m02 has defined MAC address 52:54:00:c2:b6:bb in network mk-multinode-094885
	I0731 20:19:07.161743  157772 main.go:141] libmachine: (multinode-094885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:b6:bb", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:17:21 +0000 UTC Type:0 Mac:52:54:00:c2:b6:bb Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-094885-m02 Clientid:01:52:54:00:c2:b6:bb}
	I0731 20:19:07.161772  157772 main.go:141] libmachine: (multinode-094885-m02) DBG | domain multinode-094885-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:c2:b6:bb in network mk-multinode-094885
	I0731 20:19:07.162019  157772 host.go:66] Checking if "multinode-094885-m02" exists ...
	I0731 20:19:07.162365  157772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:19:07.162406  157772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:19:07.177512  157772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0731 20:19:07.177959  157772 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:19:07.178400  157772 main.go:141] libmachine: Using API Version  1
	I0731 20:19:07.178429  157772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:19:07.178770  157772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:19:07.178951  157772 main.go:141] libmachine: (multinode-094885-m02) Calling .DriverName
	I0731 20:19:07.179184  157772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:19:07.179207  157772 main.go:141] libmachine: (multinode-094885-m02) Calling .GetSSHHostname
	I0731 20:19:07.181765  157772 main.go:141] libmachine: (multinode-094885-m02) DBG | domain multinode-094885-m02 has defined MAC address 52:54:00:c2:b6:bb in network mk-multinode-094885
	I0731 20:19:07.182201  157772 main.go:141] libmachine: (multinode-094885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:b6:bb", ip: ""} in network mk-multinode-094885: {Iface:virbr1 ExpiryTime:2024-07-31 21:17:21 +0000 UTC Type:0 Mac:52:54:00:c2:b6:bb Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-094885-m02 Clientid:01:52:54:00:c2:b6:bb}
	I0731 20:19:07.182223  157772 main.go:141] libmachine: (multinode-094885-m02) DBG | domain multinode-094885-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:c2:b6:bb in network mk-multinode-094885
	I0731 20:19:07.182353  157772 main.go:141] libmachine: (multinode-094885-m02) Calling .GetSSHPort
	I0731 20:19:07.182537  157772 main.go:141] libmachine: (multinode-094885-m02) Calling .GetSSHKeyPath
	I0731 20:19:07.182705  157772 main.go:141] libmachine: (multinode-094885-m02) Calling .GetSSHUsername
	I0731 20:19:07.182810  157772 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-121704/.minikube/machines/multinode-094885-m02/id_rsa Username:docker}
	I0731 20:19:07.261119  157772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:19:07.277421  157772 status.go:257] multinode-094885-m02 status: &{Name:multinode-094885-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:19:07.277458  157772 status.go:255] checking status of multinode-094885-m03 ...
	I0731 20:19:07.277759  157772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 20:19:07.277795  157772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:19:07.293460  157772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I0731 20:19:07.293949  157772 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:19:07.294436  157772 main.go:141] libmachine: Using API Version  1
	I0731 20:19:07.294472  157772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:19:07.294812  157772 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:19:07.295045  157772 main.go:141] libmachine: (multinode-094885-m03) Calling .GetState
	I0731 20:19:07.296786  157772 status.go:330] multinode-094885-m03 host status = "Stopped" (err=<nil>)
	I0731 20:19:07.296800  157772 status.go:343] host is not running, skipping remaining checks
	I0731 20:19:07.296805  157772 status.go:257] multinode-094885-m03 status: &{Name:multinode-094885-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
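
Editor's note: with m03 stopped, `minikube status` exits non-zero (exit status 7 in the run above) while still printing the per-node blocks to stdout. The sketch below runs the status command and reads both the text and the exit code; it treats exit code 7 the way this run shows it (a profile with at least one stopped host) rather than as a documented contract.

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "-p", "multinode-094885", "status")
        out, err := cmd.Output()
        fmt.Print(string(out)) // per-node host/kubelet/apiserver blocks, as in the log
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &exitErr):
            // In the run above, exit status 7 accompanied a stopped worker node.
            fmt.Println("status exited with code", exitErr.ExitCode())
        default:
            log.Fatal(err)
        }
    }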

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-094885 node start m03 -v=7 --alsologtostderr: (39.948253155s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.57s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-094885 node delete m03: (1.620736288s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (181.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094885 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 20:30:09.825819  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-094885 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.473938951s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-094885 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.99s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-094885
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094885-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-094885-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.638165ms)

                                                
                                                
-- stdout --
	* [multinode-094885-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-094885-m02' is duplicated with machine name 'multinode-094885-m02' in profile 'multinode-094885'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-094885-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-094885-m03 --driver=kvm2  --container-runtime=crio: (43.580201188s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-094885
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-094885: exit status 80 (223.374652ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-094885 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-094885-m03 already exists in multinode-094885-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-094885-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.68s)
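
Note: both rejections above are name-collision checks; a new profile name may not match an existing profile or any of its machine names. The commands the log itself points to show which names are already taken before choosing one:

    minikube profile list
    minikube node list -p multinode-094885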

                                                
                                    
TestScheduledStopUnix (117.21s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-364578 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-364578 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.661697663s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-364578 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-364578 -n scheduled-stop-364578
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-364578 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-364578 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-364578 -n scheduled-stop-364578
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-364578
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-364578 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0731 20:37:34.577402  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-364578
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-364578: exit status 7 (64.604213ms)

                                                
                                                
-- stdout --
	scheduled-stop-364578
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-364578 -n scheduled-stop-364578
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-364578 -n scheduled-stop-364578: exit status 7 (64.395956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-364578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-364578
--- PASS: TestScheduledStopUnix (117.21s)
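
Note: a rough sketch of the scheduled-stop flow exercised above, using the same flags (the profile name "demo" is illustrative):

    minikube start -p demo --memory=2048 --driver=kvm2 --container-runtime=crio
    minikube stop -p demo --schedule 5m         # arm a stop five minutes out
    minikube stop -p demo --cancel-scheduled    # cancel the pending stop
    minikube stop -p demo --schedule 15s        # re-arm with a short delay and let it fire
    minikube status -p demo                     # exits with status 7 once the host reports Stopped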

                                                
                                    
TestRunningBinaryUpgrade (205.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1016425457 start -p running-upgrade-437728 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0731 20:40:09.825620  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1016425457 start -p running-upgrade-437728 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m46.857242992s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-437728 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-437728 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.01186184s)
helpers_test.go:175: Cleaning up "running-upgrade-437728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-437728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-437728: (1.028199423s)
--- PASS: TestRunningBinaryUpgrade (205.52s)
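
Note: the upgrade path above starts a cluster with an older minikube release and then re-runs start on the same profile with the binary under test. A sketch of that flow (the old-binary path and profile name are illustrative):

    /path/to/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    ./out/minikube-linux-amd64 start -p upgrade-demo --memory=2200 --driver=kvm2 --container-runtime=crio
    ./out/minikube-linux-amd64 delete -p upgrade-demo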

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938926 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-938926 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (73.329328ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-938926] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
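
Note: this is the expected flag-validation error; --no-kubernetes cannot be combined with --kubernetes-version. A sketch of the invalid and valid invocations (profile name illustrative):

    # rejected with MK_USAGE (exit status 14)
    minikube start -p nok8s --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # accepted: drop the version flag, clearing any globally configured value first
    minikube config unset kubernetes-version
    minikube start -p nok8s --no-kubernetes --driver=kvm2 --container-runtime=crio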

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (72.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938926 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-938926 --driver=kvm2  --container-runtime=crio: (1m11.754709247s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-938926 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (72.01s)

                                                
                                    
TestNetworkPlugins/group/false (2.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-341849 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-341849 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (98.203751ms)

                                                
                                                
-- stdout --
	* [false-341849] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 20:37:50.664767  165674 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:37:50.665014  165674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:37:50.665022  165674 out.go:304] Setting ErrFile to fd 2...
	I0731 20:37:50.665026  165674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:37:50.665185  165674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-121704/.minikube/bin
	I0731 20:37:50.665770  165674 out.go:298] Setting JSON to false
	I0731 20:37:50.666629  165674 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8407,"bootTime":1722449864,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:37:50.666689  165674 start.go:139] virtualization: kvm guest
	I0731 20:37:50.668701  165674 out.go:177] * [false-341849] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:37:50.670048  165674 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:37:50.670086  165674 notify.go:220] Checking for updates...
	I0731 20:37:50.672877  165674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:37:50.674439  165674 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-121704/kubeconfig
	I0731 20:37:50.675733  165674 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-121704/.minikube
	I0731 20:37:50.677060  165674 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:37:50.678477  165674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:37:50.680074  165674 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:37:50.714231  165674 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 20:37:50.715452  165674 start.go:297] selected driver: kvm2
	I0731 20:37:50.715461  165674 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:37:50.715481  165674 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:37:50.717880  165674 out.go:177] 
	W0731 20:37:50.719301  165674 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0731 20:37:50.720863  165674 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-341849 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-341849" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-341849

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-341849"

                                                
                                                
----------------------- debugLogs end: false-341849 [took: 2.684727447s] --------------------------------
helpers_test.go:175: Cleaning up "false-341849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-341849
--- PASS: TestNetworkPlugins/group/false (2.92s)
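
Note: the early exit above comes from start-time validation: the crio runtime needs a CNI, so --cni=false is rejected before any VM is created. A sketch of the distinction (profile name illustrative; bridge is one of the accepted built-in CNI values):

    # rejected: the "crio" container runtime requires CNI (MK_USAGE, exit status 14)
    minikube start -p cni-demo --cni=false --container-runtime=crio --driver=kvm2
    # accepted: name a CNI explicitly, or omit --cni and let minikube pick one
    minikube start -p cni-demo --cni=bridge --container-runtime=crio --driver=kvm2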

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (45.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938926 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-938926 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.738287504s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-938926 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-938926 status -o json: exit status 2 (217.487558ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-938926","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-938926
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.95s)

                                                
                                    
TestNoKubernetes/serial/Start (72.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938926 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0731 20:39:52.871933  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-938926 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m12.032862486s)
--- PASS: TestNoKubernetes/serial/Start (72.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-938926 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-938926 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.345855ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
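
Note: the assertion above is simply that kubelet is not an active systemd unit in the guest; the same probe can be run by hand against the profile from this log:

    minikube ssh -p NoKubernetes-938926 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero while kubelet is not running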

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-938926
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-938926: (1.298188399s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (65.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-938926 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-938926 --driver=kvm2  --container-runtime=crio: (1m5.301366904s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (65.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-938926 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-938926 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.06933ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (114.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3548901160 start -p stopped-upgrade-358493 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0731 20:42:34.578229  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3548901160 start -p stopped-upgrade-358493 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (50.11386034s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3548901160 -p stopped-upgrade-358493 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3548901160 -p stopped-upgrade-358493 stop: (1.466730069s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-358493 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-358493 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.399697337s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (114.98s)
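
Note: same upgrade pattern as TestRunningBinaryUpgrade, except the cluster is stopped by the old binary before the new binary brings it back up. A sketch (old-binary path and profile name illustrative):

    /path/to/minikube-v1.26.0 start -p stopped-upgrade --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /path/to/minikube-v1.26.0 stop -p stopped-upgrade
    ./out/minikube-linux-amd64 start -p stopped-upgrade --memory=2200 --driver=kvm2 --container-runtime=crio
    ./out/minikube-linux-amd64 logs -p stopped-upgrade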

                                                
                                    
TestPause/serial/Start (58.21s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-809955 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-809955 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (58.206585323s)
--- PASS: TestPause/serial/Start (58.21s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-358493
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-358493: (1.082495491s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (75.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m15.505889051s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (102.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m42.40591502s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (102.41s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (78.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-809955 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0731 20:45:09.825193  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-809955 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.185486988s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (78.22s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-341849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-341849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sf7kp" [26921ff0-c365-476a-b394-a577b359a41a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sf7kp" [26921ff0-c365-476a-b394-a577b359a41a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004194491s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-341849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
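
Note: the DNS, Localhost and HairPin checks above run three probes from inside the netcat deployment; the manual equivalents, using the context from this log, are:

    kubectl --context auto-341849 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin: reach the pod through its own service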

                                                
                                    
TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-809955 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-809955 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-809955 --output=json --layout=cluster: exit status 2 (274.175692ms)

                                                
                                                
-- stdout --
	{"Name":"pause-809955","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-809955","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

                                                
                                    
TestPause/serial/Unpause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-809955 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-809955 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (1.02s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-809955 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-809955 --alsologtostderr -v=5: (1.015828708s)
--- PASS: TestPause/serial/DeletePaused (1.02s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.55s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.55s)
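
Note: the pause group above walks the full lifecycle; a condensed sketch of the same commands (profile name taken from this log):

    minikube start -p pause-809955 --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio
    minikube pause -p pause-809955
    minikube status -p pause-809955 --output=json --layout=cluster   # exit status 2, StatusName "Paused"
    minikube unpause -p pause-809955
    minikube delete -p pause-809955
    minikube profile list --output json                              # confirm the profile is gone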

                                                
                                    
TestNetworkPlugins/group/calico/Start (90.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m30.786933342s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.79s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (101.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m41.876655824s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (101.88s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rpqjb" [af4aea54-8387-495c-a243-9d6a8cf6e657] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007080775s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
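
Note: the readiness gate above polls for pods labelled app=kindnet in kube-system; an equivalent manual check (timeout value illustrative):

    kubectl --context kindnet-341849 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-341849 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=600s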

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-341849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-341849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hkl4d" [983d255c-e4bb-4fdb-b876-aecbad5a21bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hkl4d" [983d255c-e4bb-4fdb-b876-aecbad5a21bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004434449s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-341849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (105.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m45.633634012s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (105.63s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k6hls" [424bb92e-d329-423f-8ae3-0a80bad19466] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005533141s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-341849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-341849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-r7ghq" [f62a0905-8cdc-43b5-aa3d-83f6c88eb29f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-r7ghq" [f62a0905-8cdc-43b5-aa3d-83f6c88eb29f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.325841214s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.55s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-341849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-341849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-341849 replace --force -f testdata/netcat-deployment.yaml: (2.364756786s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7rjf4" [23636447-040d-4a6c-aade-935ad18f7457] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0731 20:47:34.577636  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7rjf4" [23636447-040d-4a6c-aade-935ad18f7457] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004897831s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.38s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-341849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-341849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (86.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.652440804s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.65s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-341849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m25.875566241s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.88s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-341849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-341849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tt8r2" [69d90324-b230-4889-afbb-9c102bdce9af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tt8r2" [69d90324-b230-4889-afbb-9c102bdce9af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003637364s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-341849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (165.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-916885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 20:48:57.627222  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/addons-715925/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-916885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (2m45.592094116s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (165.59s)
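This profile is started with --preload=false, so minikube skips the preloaded image/binary tarball and cri-o has to pull every Kubernetes image on its own, which plausibly explains why this FirstStart (2m45s) runs much longer than the embed-certs FirstStart later in this report (1m4s). The logged invocation, reformatted with line continuations for readability:

# Start a profile without the preload tarball; images are pulled individually by cri-o
out/minikube-linux-amd64 start -p no-preload-916885 \
  --memory=2200 --alsologtostderr --wait=true \
  --preload=false --driver=kvm2 --container-runtime=crio \
  --kubernetes-version=v1.31.0-beta.0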

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-v4kxd" [1702801c-a7c9-4cfd-a3cc-76b751a8845b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006499914s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-341849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (15.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-341849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-77n8q" [c09329d9-555c-4dd6-98c3-068921cc6d4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-77n8q" [c09329d9-555c-4dd6-98c3-068921cc6d4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.004139355s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-341849 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-341849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-67t2f" [4cd436e6-7b4f-4fce-8409-6f5d0a2aa4c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-67t2f" [4cd436e6-7b4f-4fce-8409-6f5d0a2aa4c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003481697s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-341849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-341849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-341849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (64.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-831240 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-831240 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m4.219241999s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-125614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 20:50:09.825113  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
E0731 20:50:21.067793  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:21.073143  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:21.083484  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:21.103804  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:21.144145  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:21.224595  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:21.385232  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:21.705565  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:22.346665  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:23.627719  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:26.188401  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:31.308932  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:41.549768  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
E0731 20:50:48.407534  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:48.413154  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:48.423448  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:48.443811  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:48.484953  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:48.565112  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:48.725542  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:49.046508  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:49.686643  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:50.967532  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:50:53.528284  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-125614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m38.466992371s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (98.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-831240 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e9dc9efd-eba1-4457-8c17-44c18ddc2986] Pending
helpers_test.go:344: "busybox" [e9dc9efd-eba1-4457-8c17-44c18ddc2986] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0731 20:50:58.649079  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
E0731 20:51:02.030474  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e9dc9efd-eba1-4457-8c17-44c18ddc2986] Running
E0731 20:51:08.890180  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004163021s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-831240 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-831240 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-831240 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)
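EnableAddonWhileActive turns on the metrics-server addon with its image pinned to registry.k8s.io/echoserver:1.4 and its registry overridden to fake.domain, then describes the resulting Deployment, presumably to confirm the overrides were applied rather than to obtain a working metrics pipeline. A hand-run version of the same check; the trailing grep is an illustrative addition, not part of the test:

out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-831240 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain
# Inspect the Deployment and look for the overridden registry/image
kubectl --context embed-certs-831240 describe deploy/metrics-server -n kube-system | grep -i fake.domain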

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-916885 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [80644401-888f-4d8b-9746-2339d434f682] Pending
helpers_test.go:344: "busybox" [80644401-888f-4d8b-9746-2339d434f682] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [80644401-888f-4d8b-9746-2339d434f682] Running
E0731 20:51:29.370512  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/kindnet-341849/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004852069s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-916885 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-916885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-916885 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-125614 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5df1bbfb-71e6-41df-a194-4eecaf14017f] Pending
helpers_test.go:344: "busybox" [5df1bbfb-71e6-41df-a194-4eecaf14017f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5df1bbfb-71e6-41df-a194-4eecaf14017f] Running
E0731 20:51:42.991582  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/auto-341849/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003806164s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-125614 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-125614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-125614 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (644.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-831240 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-831240 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m43.798163248s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831240 -n embed-certs-831240
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (644.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (602.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-916885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-916885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (10m2.625718584s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-916885 -n no-preload-916885
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (602.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (562.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-125614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0731 20:54:15.994926  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:18.555367  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:23.676076  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:26.619683  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:26.625101  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:26.635354  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:26.655644  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:26.695984  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:26.776307  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:26.936656  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:27.257269  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:27.898373  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:29.179543  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:30.220022  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
E0731 20:54:31.740203  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:33.916625  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:36.861302  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:47.101697  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:54:54.397064  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 20:54:55.781409  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/calico-341849/client.crt: no such file or directory
E0731 20:55:07.582549  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
E0731 20:55:09.825439  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/functional-904202/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-125614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m22.342576518s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-125614 -n default-k8s-diff-port-125614
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (562.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-239115 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-239115 --alsologtostderr -v=3: (1.355871836s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-239115 -n old-k8s-version-239115: exit status 7 (64.424649ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-239115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-586791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 21:18:08.295048  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/enable-default-cni-341849/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-586791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (49.518868663s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.52s)
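The newest-cni profile starts with CNI networking selected but no CNI actually installed: --network-plugin=cni, an --extra-config entry that hands pod-network-cidr=10.42.0.0/16 through to kubeadm, and the ServerSideApply feature gate enabled. The "cni mode requires additional setup before pods can schedule" warnings later in this group are the expected consequence. The same invocation, broken across lines for readability:

out/minikube-linux-amd64 start -p newest-cni-586791 \
  --memory=2200 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=kvm2 --container-runtime=crio \
  --kubernetes-version=v1.31.0-beta.0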

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-586791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-586791 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-586791 --alsologtostderr -v=3: (7.303040839s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586791 -n newest-cni-586791
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586791 -n newest-cni-586791: exit status 7 (66.695885ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-586791 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-586791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0731 21:19:13.435336  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/flannel-341849/client.crt: no such file or directory
E0731 21:19:26.620404  128891 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-121704/.minikube/profiles/bridge-341849/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-586791 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (36.571132026s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-586791 -n newest-cni-586791
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.89s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-586791 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-586791 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586791 -n newest-cni-586791
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586791 -n newest-cni-586791: exit status 2 (244.254365ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586791 -n newest-cni-586791
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586791 -n newest-cni-586791: exit status 2 (245.651672ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-586791 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586791 -n newest-cni-586791
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586791 -n newest-cni-586791
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.44s)
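The Pause step drives a full pause/unpause cycle and reads component state through status templates; while the profile is paused, {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, each with exit status 2, which the test explicitly treats as acceptable. The same cycle by hand, using the commands from the log:

out/minikube-linux-amd64 pause -p newest-cni-586791 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-586791 -n newest-cni-586791   # expect "Paused", exit 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-586791 -n newest-cni-586791     # expect "Stopped", exit 2
out/minikube-linux-amd64 unpause -p newest-cni-586791 --alsologtostderr -v=1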

                                                
                                    

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.79
272 TestNetworkPlugins/group/cilium 3.23
287 TestStartStop/group/disable-driver-mounts 0.15
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-341849 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-341849" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-341849

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-341849"

                                                
                                                
----------------------- debugLogs end: kubenet-341849 [took: 2.64895581s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-341849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-341849
--- SKIP: TestNetworkPlugins/group/kubenet (2.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-341849 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-341849" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-341849

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-341849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-341849"

                                                
                                                
----------------------- debugLogs end: cilium-341849 [took: 3.089429365s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-341849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-341849
--- SKIP: TestNetworkPlugins/group/cilium (3.23s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-248084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-248084
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    